Jun 18 2018

Last month, Ithaca College introduced the first version of what will represent the biggest change to the college’s website technology, design, content, and structure in more than a decade—a redesigned and rebuilt site that’s more visually appealing and easier to use.

Over the past year, the college and its partners, Four Kitchens and the design firm Beyond, have been hard at work in a Team Augmentation capacity, supporting a front-to-back overhaul of Ithaca.edu to better serve the educational and community goals of Ithaca’s students, faculty, and staff. The results of the team’s efforts can be viewed at https://www.ithaca.edu.

Founded in 1892, Ithaca College is a residential college dedicated to building knowledge and confidence through a continuous cycle of theory, practice, and performance. Home to some 6,500 students, the college offers more than 100 degree programs in its schools of Business, Communications, Humanities and Sciences, Health Sciences and Human Performance, and Music.

Students, faculty, and staff at Ithaca College create an active, inclusive community anchored in a keen desire to make a difference in the local community and the broader world. The college is consistently ranked as one of the nation’s top producers of Fulbright scholars, one of the most LGBTQ+ friendly schools in the country, and one of the top 10 colleges in the Northeast.


On the backend, the team—including members of Ithaca’s dev org working alongside Four Kitchens—built a Drupal 8 site. The transition to Drupal 8 keeps the college on current technology, positioning it for sustainable success. Four Kitchens emphasized automation and continuous integration so the team could focus on efficiently developing creative, easy-to-test solutions. To achieve that, the team set up CircleCI 2.0 as middleware between the GitHub repository and Pantheon hosting, and used it throughout the project to implement, automate, and optimize visual regression testing, communication between systems, and a solid release workflow, ensuring fast and effective release cycles.

The frontend work followed the Atomic Design approach, with the team using Emulsify and Pattern Lab to facilitate component-based design and architecture. This, again, fostered long-term ease of use and success for Ithaca College.

The team worked magic with content migration. Using a process devised by Web Chef David Diers, the team migrated portions of the site one by one. Subsites corresponding to schools or departments were moved from the legacy CMS to dedicated Pantheon multidevs built off the live environment. Content managers then performed a moderated adaptation and curation process to ensure legacy content adhered to the new content model. A separate migration process then imported the content from the holding environment into the live site. This process allowed Ithaca College’s many content managers to thoroughly vet the content that would live on the new site and gave them a clear path to completion. Learn more about migrating using Paragraphs here: Migrating Paragraphs in Drupal 8


In addition to the stellar dev work, a large contributor to the project’s success was establishing a steady scrum rhythm, staying agile, and consistently improving along the way. Each individual and unit solidified into a team through daily 15-minute standups, weekly backlog grooming meetings, weekly ‘Developer Showcase Friday’ meetings, regular sprint planning meetings, and biweekly retrospective meetings. This has been such a shining success that the internal Ithaca team plans to carry this rhythm forward even after the Web Chefs’ engagement is complete.

Engineering and Development Specifics

  • Drupal 8 site hosted on Pantheon Elite, with GitHub as the canonical source of code and CircleCI 2.0 as the continuous integration and delivery platform
  • Hierarchical and decoupled architecture based mainly on the use of group entities (Group module) and entity references that allowed the creation of subsite-like internal spaces.
  • Selective use of configuration files through custom and contrib solutions like the Config Split and Config Ignore modules, creating different database projections of a shared codebase.
  • Migration process based on 2 migration groups with an intermediate holding environment for content moderation.
  • Additional migration groups support the indexing of not-yet-migrated raw legacy content for Solr search, as well as the events feed brought in through a Localist integration.
  • Living style guide for site editors, built by integrating Twig components with Drupal templates
  • Automated Visual Regression
Aerial view of the Ithaca College campus, from the Ithaca College homepage.

A well-deserved round of kudos goes to the team. As a Team Augmentation project, its success was made possible by the dedicated work and commitment to excellence of the Ithaca College project team. The leadership provided by Dave Cameron as Ithaca Product Manager, Eric Woods as Ithaca Technical Lead and Architect, and John White as Ithaca Dev for all things legacy was crucial to the project’s success. Ithaca College’s Katherine Malcuria, Senior Digital User Interface Designer, led the creation of design elements for the website.

Katherine Malcuria, Senior Digital User Interface Designer, works on design elements of the Ithaca.edu website

Ithaca Dev Michael Sprague and Web Chef David Diers, Architect, as well as former Web Chef Chris Ruppel, Frontend Engineer, also stepped in for various periods of the project. At the tail end of the project, Web Chef Brian Lewis introduced a new baby Web Chef to the world, so the amazing Randy Oest, Senior Designer and Frontend Engineer, stepped in to push things to the finish line from a front-end perspective. James Todd, Engineer, pitched in as a jack-of-all-trades, helping out wherever needed.

The Four Kitchens Team Augmentation team for the Ithaca College project was led by Brandy Jackson, Technical Project Manager, who played the roles of project manager, scrum master, and product owner interchangeably as needed. Joel Travieso, Senior Drupal Engineer, was the technical lead, backend developer, and technical architect. Brian Lewis, Frontend Engineer, meticulously implemented intricate design elements provided by the Ithaca College design team, as well as a third-party design firm, Beyond, at different stages of the project.

A final round of kudos goes out to the larger Ithaca project team: from content to DevOps to quality assurance, there are too many to name. A successful project would not have been possible without their collective efforts.

The success of the Ithaca College Website is a great example of excellent team unity and collaboration across multiple avenues. These coordinated efforts are a true example of the phrase “teamwork makes the dream work.” Congratulations to all for a job well done!

Special thanks to Brandy Jackson for her contribution to this launch announcement. 


Oct 20 2016

It has been a few years since I have had the opportunity to build a website from the absolute beginning. The most recent project I’m on continues in that vein, but it’s early enough for me to consider ripping it apart and starting all over again. The project is particularly interesting to me, as it’s my first opportunity to use Drupal 8 in earnest. As I’ve got an interest in automating as much as possible, I want to gain a better understanding of the configuration management features which have been introduced in Drupal 8.

Tearing it apart and starting again wasn’t the first thing considered. Being an arrogant Drupal dev, I figured I could simply poke around the GUI and rely on some things I’d seen at Drupalcon and Drupal camps in the past couple of years to see me through. I thought I would find it easy to build a replicated environment so that any new developer could come along, do a git clone, vagrant up, review a README.md file and/or wiki page and they’d be off and running.

Wrong.

This post outlines many of the things that I examined in the process of learning Drupal 8 while adopting a bit of humility. I’ve created a sample project with names changed to protect the innocent. Any comments are welcome.

The structure of the rest of this post is as follows:

  • Setting up and orientation with Drupal VM
  • Build your prototype
  • Replicate and automate
  • Packaging it all up and running with it
  • Summary and conclusion

Setting up and orientation with Drupal VM

I am a big fan of Jeff Geerling’s Drupal VM vagrant project, so I created a fork of it, and imaginatively called it D8config VM. We will build a Drupal site with the standard profile and use the Drupal GUI to rapidly assemble a basic prototype - no coding chops necessary. The only contributed module added and enabled at the start is the Devel module, but we will change that quickly.

Here are the prerequisites if you do follow along:

  • familiarity with the command line;
  • familiarity with Vagrant, which must be installed on your machine;
  • Vagrant 1.8.1+, Ansible 2.0.1+, and VirtualBox 5.0.20+ installed (these are the minimum versions the tutorial requires);
  • have installed the Vagrant Auto-network plugin with vagrant plugin install vagrant-auto_network. This will help prevent collisions with other virtual networks that may exist on your computer;
  • have installed the Vagrant::Hostsupdater plugin with vagrant plugin install vagrant-hostsupdater, which will manage the host’s /etc/hosts file by adding and removing hostname entries for you;
  • familiarity with git and GitHub;
  • if using Windows, you are comfortable with troubleshooting any issues you might come across, as it’s only been tested on a Mac;
  • familiarity with Drush.

Here is how the D8config VM differs from Drupal VM:

  • the config.yml and drupal.make.yml files have been committed, unlike the normal Drupal VM repo;
  • the hostname and machine name have been changed to d8config.dev and d8config respectively;
  • to take advantage of the auto-network plugin, vagrant_ip is set to 0.0.0.0. The d8config machine will then have an IP address from 10.20.1.2 to 10.20.1.254;
  • the first synced folder is configured with a relative reference to the Vagrant file itself:
# The first synced folder will be used for the default Drupal installation, if
# build_makefile: is 'true'.
- local_path: ../d8config-site         # Changed from ~/Sites/drupalvm
  destination: /var/www/d8config-site  # Changed from /var/www/drupalvm
  type: nfs
  create: true

I’ll do the same with subsequent shared folders as we progress - it’s a useful way to keep the different repos together in one directory. At the end of the tutorial, you’ll have something like:

└── projects
    ├── another_project
    ├── d8config
    │   ├── d8config_profile
    │   ├── d8config-site
    │   └── d8config-vm
    ├── my_project
    └── top_secret_project


New nice-to-haves (thanks to recent changes in Drupal VM)

If you have used Drupal VM before but haven’t upgraded in a while, there are a load of new features. Here are just two to note:

  • Ansible roles are now installed locally (in ./provisioning/roles) during the first vagrant up;
  • PHP 7 is now an option to be installed. In fact, it’s installed by default. You can select 5.6 if you like by changing php_version in the config.yml file (see PHP 5.6 on Drupal VM).
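For example, switching back to PHP 5.6 is a one-line change in config.yml (php_version is Drupal VM’s own variable):

# In config.yml - use PHP 5.6 instead of the default PHP 7:
php_version: "5.6"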

Now do this

Create a directory where you’re going to keep all of the project assets.

$ mkdir d8config

# Change to that directory in your terminal:
$ cd d8config

Clone the D8config VM repo. Or, feel free to fork D8config VM and clone your version of the repo.

$ git clone https://github.com/siliconmeadow/d8config-vm.git

# Change to the `d8config-vm` directory:
$ cd d8config-vm

# Checkout the `CG01` branch of the repo:
$ git checkout CG01

# Bring the vagrant machine up and wait for the provisioning to complete.
$ vagrant up

After successful provisioning you should be able to point your browser at http://d8config.dev and see your barebones Drupal 8 site. Username: admin; Password: admin.

Caveats

If you’ve used Drupal VM before, you will want to examine the changes in the latest version. From Drupal VM tag 3.0.0 onwards, the requirements have changed:

  • Vagrant 1.8.1+
  • Ansible 2.0.1+
  • VirtualBox 5.0.20+

One sure sign that you’ll need to upgrade is if you see this message when doing a vagrant up:

ansible provisioner: * The following settings shouldn't exist: galaxy_role_file


Build your prototype

In this section of the tutorial we’re going to start building our prototype. The brief is:

The site is a portfolio site for a large multinational corporation’s internal use, but hopefully the content architecture is simple enough to keep in your head. The following node types need to be set up: Case study, Client, Team member (the subject matter expert), and Technologies used. Set up a vocabulary called Country for the countries served, and a second called Sector to classify the information, containing tags such as Government, Music industry, Manufacturing, Professional services, etc. Delete the existing default content types. You can then delete fields you know you won’t need to avoid confusion - Comments, for example - which will then allow you to uninstall the Comment module. And, as you might deduce from the modules I’ve selected, it’s to be a multilingual site.

Hopefully this feels comfortable if you are familiar with Drupal site building. There are enough specifics for clarity, yet it’s not so prescriptive that you feel someone is telling you how to do your job. Whereas in one context you may hear a manager say “don’t bring me problems, bring me solutions”, most engineers would rather say “don’t bring me solutions, bring me problems”. I hope this brief does the latter.

Have a go at making the changes to your vanilla Drupal 8 site based on the brief.

Beyond the brief

Every ‘site building’ exercise with Drupal is a move further away from the configuration provided by the standard or minimal profiles. In our circumstance, we will enable these modules via the GUI:

  • Responsive Image
  • Syslog
  • Testing
  • BigPipe
  • Devel Generate (Devel was installed due to settings in config.yml in the d8config-vm repo)
  • Devel Kint
  • Devel Node Access
  • Web Profiler
  • Configuration Translation
  • Content Translation
  • Interface Translation
  • Language
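If you’d rather do this from the command line, a rough Drush equivalent looks like the following (the module machine names are the same ones we’ll list in the profile later):

# From within the d8config vm, in /var/www/d8config-site/drupal:
$ drush en responsive_image syslog simpletest big_pipe devel_generate kint devel_node_access webprofiler config_translation content_translation locale language -y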

I’ve also added a couple of contributed themes via Drush so the site will no longer look like the default site.

# While in your d8config-vm directory:
$ vagrant ssh

# Switch to your Drupal installation:
$ cd /var/www/d8config-site/drupal

# Enable two contributed themes (Drush will offer to download them if missing):
$ drush en integrity adminimal_theme -y

For more details on these themes, see the Integrity theme and the Adminimal theme. As you might expect, I set Integrity as the default theme, and Adminimal as the admin theme via the GUI.

After switching themes, two blocks appeared in the wrong regions. On the Block layout page, I moved the Footer menu block from the Main menu region to the Footer first region, and the Powered by Drupal block from the Main menu region to the Sub footer region.

Due to the multilingual requirement, I went to the Languages admin page and added French.


Replicate and automate

At this stage you’ve made quite a lot of changes to a vanilla Drupal site. There are many reasons you should consider automating the building of this site - to save time when bringing other members into the development process, for creating QA, UAT, pre-prod and production environments, etc. We will now start to examine ways of doing just this.

drush make your life easier

In this section we’re going to create a Drush makefile that captures the versions of Drupal core, contrib modules, and themes needed to build this site as it currently is. This file will be the first one added to the D8config profile repo. Makefiles are not a required part of a profile and could live in a repo of their own; however, to keep administration to a minimum, I’ve found this a useful way to simplify asset management for site building.

Let’s first tweak the config.yml in the D8config VM repo, so that we have a synced folder for the profile. To do so, either:

  1. git checkout CG02 in the d8config-vm directory (where I’ve already made the changes for you), or;
  2. Add the following to the config.yml in the vagrant_synced_folders section:
# This is so the profile repo can be manipulated on the guest or host.
- local_path: ../d8config_profile
  destination: /build/d8config/d8config_profile
  type: nfs
  create: true

After doing either of the above, do a vagrant reload which will both create the directory on the Vagrant host, and mount it from the d8config-vm guest.

Next, let’s generate a basic makefile from the site as it now is.

# From within the d8config vm:
$ cd /var/www/d8config-site/drupal
$ drush generate-makefile /build/d8config/d8config_profile/d8config.make

This makefile is now available in the d8config_profile directory which is at the same level as your d8config-vm directory when viewing on your host machine.

Because we only have Drupal core, two contrib themes and the Devel module, it’s a very simple file and it doesn’t need any tweaking at this stage. I’ve committed it to the D8config profile repo and tagged it as CG01.
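For reference, the generated file looks roughly like the sketch below - trimmed, and with placeholder versions, so expect your own output to differ:

core: 8.x
api: 2
projects:
  drupal:
    type: core
    version: '8.1'      # placeholder - whatever core Drupal VM installed
  devel:
    version: '1.x-dev'  # placeholder
  integrity:
    version: '1.0'      # placeholder
  adminimal_theme:
    version: '1.0'      # placeholder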

Raising our profile

Since we’ve established that the makefile is doing very little on this site, we need to look at completing the rest of the profile, which will apply the configuration changes when building the site. The How to Write a Drupal 8 Installation Profile page is quite clear, and we’ll use it to guide us.

First, our machine name has already been chosen, as I’ve called the repo d8config_profile.

Rather than writing the d8config_profile.info.yml file from scratch, let’s duplicate standard.info.yml from the standard profile in Drupal core, as that’s what we used to build the vanilla site to begin with. We can then modify it to reflect what we’ve done since.

# From within the /build/d8config/d8config_profile directory in the vagrant machine:
$ cp /var/www/d8config-site/drupal/core/profiles/standard/standard.info.yml .

$ mv standard.info.yml d8config_profile.info.yml

The first five lines of the d8config_profile.info.yml need to look like this:

name: D8config
type: profile
description: 'For a Capgemini Engineering Blog tutorial.'
core: 8.x
dependencies:

The end of the file should look like this, listing the required core modules plus the modules and themes we’ve added:

- automated_cron
- responsive_image
- syslog
- simpletest
- big_pipe
- migrate
- migrate_drupal
- migrate_drupal_ui
- devel
- devel_generate
- kint
- devel_node_access
- webprofiler
- config_translation
- content_translation
- locale
- language
themes:
- bartik
- seven
- integrity
- adminimal_theme

Also, don’t forget, we uninstalled the comment module, so I’ve also removed that from the dependencies.

You still need moar!

The profile specifies the modules to be enabled, but not how they’re to be configured. Also, what about the new content types we’ve added? And the taxonomies? With previous versions, we relied on the features module, and perhaps strongarm to manage these tasks. But now, we’re finally getting to the subject of the tutorial - Drupal 8 has a configuration system out of the box.

This is available via the GUI, as well as Drush. Either method allows you to export and import the configuration settings for the whole of your site. And if you look further down the profile how-to page, you will see that we can include configuration with installation profiles.

Let’s export our configuration using Drush. This will be far more efficient than exporting via the GUI, which downloads a *.tar.gz file that we’d then need to extract and copy into the config/install directory of the profile.

While logged into the vagrant machine and inside the site’s root directory:

# Create the config/install directory first:
$ mkdir -p /build/d8config/d8config_profile/config/install

# Export!
$ drush config-export --destination="/build/d8config/d8config_profile/config/install"

When I exported my configuration, there were ~215 files created. Try ls -1 | wc -l in the config/install directory to check for yourself.


The reason we’re gathered here today (a brief intermission)…

I hope you are finding this tutorial useful - and also sensible. When I started writing this post, I hadn’t realised it would cover quite so much ground. The key thing I thought I would be covering was Drupal 8’s configuration management. It was something I was very excited about, and I still am. Demonstrating some of the fun I’ve had with it is still the central point of this post. All of the previous steps to get to this point were fun too, don’t get me wrong. From my point of view, there were no surprises.

Spoiler alert

Configuration management, on the other hand - this is true drama. Taking an existing shared development site and recreating it locally using Drush make and a basic profile (without the included config/install directory) is just a trivial soap opera. If you want real fun, visit the configuration-syncing aspect, armed only with knowledge of prior versions of Drupal and don’t RTFM.

Do RTFM

No, really. Do it.

The secret sauce in this recipe is…

After doing the export of the configuration in the previous section, I finally started running into the problems that I faced during my real world project - the project mentioned at the beginning of this post. Importing the configuration repeatedly and consistently failed with quite noisy and complex stack trace errors which were difficult to make sense of. Did I mention that perhaps I should have read the manual?

We need to do two things to make the configuration files usable in this tutorial before committing:

# Within the d8config_profile/config/install directory:
$ rm core.extension.yml update.settings.yml
$ find ./ -type f -exec sed -i '/^uuid: /d' {} \;

The removal of those two files was found to be required thanks to reading this and this. At this stage, I can confirm these were the only two files necessary for removal, and perhaps as Drupal 8’s configuration management becomes more sophisticated, this will not be necessary. The second command will recursively remove the lines with the uuid key/value pairs in all files.
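A quick way to confirm the cleanup worked:

# Still within the config/install directory - both should output 0:
$ grep -rl '^uuid: ' . | wc -l
$ ls core.extension.yml update.settings.yml 2>/dev/null | wc -l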


Packaging it all up and running with it

We’ve done all the preparation, and now need to make some small tweaks and commit them so our colleagues can start where we’ve left off. To do so we need to:

  1. add the profile to the makefile;
  2. commit our changes to the d8config_profile repo;
  3. tweak the config.yml file in the d8config-vm repo, to use our makefile and profile during provisioning.

To have the profile downloaded as part of the build, add this to the bottom of d8config.make (in the D8config profile):

d8config_profile:
  type: profile
  download:
    type: git
    url: [email protected]:siliconmeadow/d8config_profile.git
    working-copy: true

I’ve committed the changes to the D8Config profile and tagged it as CG02.

Then the last change to make before testing our solution is to tweak the config.yml in the D8config VM repo. Three lines need changing:

# Change the drush_makefile_path:
drush_makefile_path: "/build/d8config/d8config_profile/d8config.make"

# Change the drupal_install_profile:
drupal_install_profile: d8config_profile

# Remove devel from the drupal_enable_modules array:
drupal_enable_modules: []

As you can see, the changes to the vagrant project are all about the profile.

With both the D8Config VM and the D8Config profile in adjacent folders, and confident that this is all going to work, from the host do:

# From the d8config-vm directory
$ vagrant destroy
# Type 'y' when prompted.

# Go!
$ vagrant up

Once the provisioning is complete, you should be able to check that the site is functioning at http://d8config.dev. Once there, check the presence of the custom content types, taxonomy, expected themes, placement of blocks, etc.
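If you prefer the command line, Drush can also confirm which profile the site was built with:

# From the d8config-vm directory:
$ vagrant ssh
$ cd /var/www/d8config-site/drupal
$ drush status | grep -i profile  # should report d8config_profile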


Summary and conclusion

The steps we’ve taken in this tutorial have given us an opportunity to look at the latest version of Drupal VM, build a quick-and-dirty prototype in Drupal 8 and make a profile which our colleagues can use to collaborate with us. I’ve pointed out some gotchas and in particular some things you will want to consider regarding exporting and importing Drupal 8 configuration settings.

There are more questions raised as well. For example, why not simply keep the d8config.make file in the d8config-vm repo? And what about the other ways people use Drupal VM in their workflow - for example here and here? Why not use the minimal profile when starting a prototype, and save the step of deleting content types?

Questions or comments? Please let me know. And next time we’ll just use Docker, shall we?

May 18 2016

In this blog post I'll discuss some methods of ensuring that your software is kept up to date, and some recent examples of why you should consider security to be among your top priorities instead of viewing it as an inconvenience or hassle.

Critics often attack the stability and security of Open Source due to the frequent releases and updates as projects evolve through constant contributions to their code from the community. They claim that open source requires too many patches to stay secure, and too much maintenance as a result.

This is easily countered with the explanation that by having so many individuals working with the source code of these projects, and so many eyes on them, potential vulnerabilities and bugs are uncovered much faster than with programs built on proprietary code. It is difficult for maintainers to ignore or delay the release of updates and patches with so much public pressure and visibility, and this should be seen as a positive thing.

The reality is that achieving a secure open source infrastructure and application environment requires much the same approach as with commercial software. The same principles apply, with only the implementation details differing. The most prominent difference is the transparency that exists with open source software.

Making Headlines

Open Source software often makes headlines when it is blamed for security breaches or data loss. The most recent high profile example would be the Mossack Fonseca “Panama Papers” breach, which was blamed on either WordPress or Drupal. It would be more accurate to blame the firm itself for having poor security practices, including severely outdated software throughout the company and a lack of even basic encryption.

Mossack Fonseca were using an outdated version of Drupal: 7.23. This version was released on 8 Aug 2013, almost 3 years ago as of the time of writing. That version has at least 25 known vulnerabilities. Several of these are incredibly serious, and were responsible for the infamous “Drupalgeddon” event which led to many sites being remotely exploited. Drupal.org warned users that “anyone running anything below version 7.32 within seven hours of its release should have assumed they’d been hacked”.

Protection by Automation

Probably the most effective way to keep your software updated is to automate and enforce the process. Don’t leave it in the hands of users or clients to apply or approve updates. The complexity of this will vary depending on what you need to update, and how, but it can often be as simple as enabling the built-in automatic updates that your software may already provide, or scheduling a daily command to apply any outstanding updates.

Once you've got it automated (the easy part) you will want to think about testing these changes before they hit production systems. Depending on the impact of the security exploits that you're patching, it may be more important to install updates even without complete testing; a broken site is often better than a vulnerable site! You may not have an automated way of testing every payment permutation on a large e-commerce site, for example, but that should not dissuade you from applying a critical update that exposes credit card data. Just be sure you aren't using this rationale as an excuse to avoid implementing automated testing.

The simple way

As a very common example of how simple the application of high priority updates can be, most Linux distributions will have a tried and tested method of automatically deploying security updates through their package management systems. For example, Ubuntu/Debian have the unattended-upgrades package, and Redhat-based systems have yum-cron. At the very least you will be able to schedule the system’s package manager to perform nightly updates yourself. This will cover the OS itself as well as any officially supported software that you have installed through the package manager. This means that you probably already have a reliable method of updating 95% of the open source software that you're using with minimal effort, and potentially any third-party software if you're installing from a compatible software repository. Consult the documentation for your Linux distro (or Google!) to find out how to enable this, and you can ensure that you are applying updates as soon as they are made available.
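On Ubuntu or Debian, for instance, getting this going takes two commands:

# Install the package, then enable it via the interactive prompt:
$ sudo apt-get install unattended-upgrades
$ sudo dpkg-reconfigure -plow unattended-upgrades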

The complex way

For larger or more complex infrastructure where you may be using configuration management software (such as Ansible, Chef, or Puppet) to enforce state and install packages, you have more options. Config management software will allow you to apply updates to your test systems first, and report back on any immediate issues applying these updates. If a service fails to restart, a service does not respond on the expected port after the upgrade, or anything goes wrong, this should be enough to stop these changes reaching production until the situation is resolved. This is the same process that you should already be following for all config changes or package upgrades, so no special measures should be necessary.
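As a rough sketch of what the first step can look like with Ansible (the host group name here is an assumption, and the play targets Debian-family hosts):

# Apply pending updates to the test group first; promote the same play
# to production only once the test systems pass their checks.
- hosts: test
  become: true
  tasks:
    - name: Apply all pending package updates
      apt:
        upgrade: dist
        update_cache: true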

The decision to make security updates a separate scheduled task, or to implement them directly in your config management process will depend on the implementation, and it would be impossible to cover every possible method here.

Risk Management

Automatically upgrading software packages on production systems is not without risks. Many of these can be mitigated with a good workflow for applying changes (of any kind) to your servers, and confidence can be added with automated testing.

Risks

  • You need to have backups of your configuration files, or be enforcing them with config management software. You may lose custom configuration files if they are not flagged correctly in the package, or the package manager does not behave how you expect when updating the software.
  • Changes to base packages like openssl, the kernel, or system libraries can have an unexpected effect on many other packages.
  • There may be bugs or regressions in the new version. Performance may be degraded.
  • Automatic updates may not complete the entire process needed to make the system secure. For example, a kernel update will generally require a reboot, or multiple services may need to be restarted. If this does not happen as part of the process, you may still be running unsafe versions of the software despite installing upgrades.
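On that last point, Debian-based systems at least make the check easy:

# Prints a message only if an update has flagged a pending reboot:
$ [ -f /var/run/reboot-required ] && echo "Reboot needed"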

Reasons to apply updates automatically

  • The server is not critical and occasional unplanned outages are acceptable.
  • You are unlikely to apply updates manually to this server.
  • You have a way to recover the machine if remote access via SSH becomes unavailable.
  • You have full backups of any data on the machine, or no important data is stored on it.

Reasons to NOT apply updates automatically

  • The server provides a critical service and has no failover in place, and you cannot risk unplanned outages.
  • You have custom software installed manually, or complex version dependencies that may be broken during upgrades. This includes custom kernels or kernel modules.
  • You need to follow a strict change control process on this environment.

Reboot Often

Most update systems will also be able to automatically reboot for you if this is required (such as a kernel update), and you should not be afraid of this or delay it unless you're running a critical system. If you are running a critical system, you should already have a method of hot-patching the affected systems, performing rolling/staggered reboots behind a load-balancer, or some other cloud wizardry that does not interrupt service.

Decide on a maintenance window and schedule your update system to use it whenever a reboot is required. Have monitoring in place to alert you in the event of failures, and schedule reboots within business hours wherever possible.
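With unattended-upgrades, for example, automatic reboots and the preferred time window are both single-line settings:

// In /etc/apt/apt.conf.d/50unattended-upgrades:
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";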

Drupal and Other Web-based Applications

Most web-based CMS software such as Drupal and WordPress offers automated updates, or at least notifications. Drupal security updates for both core and contributed modules can be applied by Drush, which can in turn be scheduled easily using cron or a task runner like Jenkins. This may not be a solution if you follow anything but the most basic of deployment workflows and/or rely on a version control system such as Git for your development (which is where these updates should go, not directly to the web server). Having your production site automatically update itself will mean that it no longer matches what you deployed, nor what is in your version control repository, and it will be bypassing any CI/testing that you have in place. It is still an option worth considering if you lack all of these things or just want to guarantee that your public-facing site is getting patches as a priority over all else.

You could make this approach work by serving the Git repo as the document root, updating Drupal automatically (using Drush in 'security only' upgrade mode on cron), then committing those changes (which should not conflict with your custom code/modules) back to the repo. Not ideal, but better than having exploitable security holes on your live servers.
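A hedged sketch of that approach as a nightly cron entry (Drush 8 syntax; the path is a placeholder):

# crontab entry: apply security-only updates, then commit them back to the repo.
0 3 * * * cd /var/www/mysite && drush pm-update --security-only -y && git add -A && git commit -m "Automated security updates"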

If your Linux distribution (or the CMS maintainers themselves) provide the web-based software as a package, and security updates are applied to it regularly, you may even consider using their version of the application. You can treat Drupal as just another piece of software in the stack, and the only thing that you're committing to version control and deploying to servers is any custom modules to be layered on top of the (presumably) secure version provided as part of the OS.

Some options that may fit better into the common CI/Git workflows might be:

  • Detect, apply, and test security patches off-site on a dedicated server or container. If successful, commit them back to version control to your dev/integration branch.
  • Check for security updates as part of your CI system. Apply, test and merge any updates into your integration branch.

Third-party Drupal Modules (contrib)

Due to the nature of contrib Drupal modules (i.e., those provided by the community), it can be difficult to update them without also bringing in other changes, such as new features (and bugs!) that the author may have introduced since the version you are currently running. Best practice is to keep all of the contrib modules the site uses up to date where possible, and to treat this with the same care and testing as you would updates to Drupal itself. Contrib modules often receive important bug fixes and performance improvements that you may be missing out on if you only ever update in the event of a security announcement.

Summary

  • Ensure that updates are coming from a trusted and secure (SSL) source, such as your Linux distribution's packaging repositories or the official Git repositories for your software.
  • If you do not trust the security updates enough to apply them automatically, you should probably not be using the software in the first place.
  • Ensure that you are alerted in the event of any failures in your automation.
  • Subscribe to relevant security mailing lists, RSS feeds, and user groups for your software.
  • Prove to yourself and your customers that your update method is reliable.
  • Do not allow your users, clients, or boss to postpone or delay security updates without an incredibly good reason.

You are putting your faith in the maintainers’ ability to provide timely updates that will not break your systems when applied. This is a risk you will have to take if you automate the process, but it can be mitigated through automated or manual testing.

Leave It All To Somebody Else

If all this feels like too much responsibility and hard work then it’s something Ixis have many years of experience in. We have dedicated infrastructure and application support teams to keep your systems secure and updated. Get in touch to see how we can ensure you're secure now and in the future whilst enjoying the use and benefits of open source software.

Dec 11 2015

Drupal 8 logo

The Queue API in Drupal allows us to defer a number of tasks to a later stage. What this means is that we can place items into a queue to be processed some time in the future, each individual item at least once. Usually this happens on cron runs, and Drupal 8 allows for a quick setup of cron-based queues. It doesn’t necessarily have to be cron, however.

In this article, we will look at using the Queue API in Drupal 8 by exploring two simple examples. The first will see the queue triggered by Cron while the second will allow us to manually do so ourselves. However, the actual processing will be handled by a similar worker. If you want to follow along, clone this git repository where you can find the npq module we will write in this article.

The module we’ll work with is called Node Publisher Queue and it automatically adds newly created nodes that are saved unpublished to a queue, to be published later on. We will see how “later on” can mean the next cron run or a manual action triggered by the site’s administrator. First, let’s understand some basic concepts about queues in Drupal 8.

The theory

There are a few components that make up the Queue API in Drupal 8.

The most important role in this API is played by the QueueInterface implementation which represents the queue. The default queue type Drupal 8 ships with is currently the DatabaseQueue which is a type of reliable queue that makes sure all its items are processed at least once and in their original order (FIFO). This is in contrast to unreliable queues which only do their best to achieve this (something for which valid use cases do exist).

The typical role of the queue object is to create items, later claim them from the queue and delete them when they have been processed. In addition, it can release items if processing is either not finished or another worker needs to process them again before deletion.
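In code, that lifecycle looks roughly like this (the queue name here is hypothetical):

$queue = \Drupal::queue('my_queue');
// Create an item holding arbitrary data.
$queue->createItem(['nid' => 42]);
// Claim the next item, leasing it so other workers can't grab it.
$item = $queue->claimItem();
// ... process $item->data ...
// Delete it when done, or release it to be processed again later.
$queue->deleteItem($item);
// $queue->releaseItem($item);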

The QueueInterface implementation is instantiated with the help of a general QueueFactory. In the case of the DatabaseQueue, the former uses the DatabaseQueueFactory as well. Queues also need to be created before they can be used. However, the DatabaseQueue is already created when Drupal is first installed so no additional setup is required.

The Queue Workers are responsible for processing queue items as they receive them. In Drupal 8 these are QueueWorker plugins that implement the QueueWorkerInterface. Using the QueueWorkerManager, we create instances of these plugins and process the items whenever the queue needs to be run.

The Node Publish Queue module

Now that we’ve covered the basic concepts of the Queue API in Drupal 8, let’s get our hands dirty and create the functionality described in the introduction. Our npq.info.yml file can be simple:

name: Node Publish Queue
description: Demo module illustrating the Queue API in Drupal 8
core: 8.x
type: module

Queue item creation

Inside the npq.module file we take care of the logic for creating queue items whenever a node is saved and not published:

use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Queue\QueueFactory;
use Drupal\Core\Queue\QueueInterface;

/**
 * Implements hook_entity_insert().
 */
function npq_entity_insert(EntityInterface $entity) {
  if ($entity->getEntityTypeId() !== 'node') {
    return;
  }

  if ($entity->isPublished()) {
    return;
  }

  /** @var QueueFactory $queue_factory */
  $queue_factory = \Drupal::service('queue');
  /** @var QueueInterface $queue */
  $queue = $queue_factory->get('cron_node_publisher');
  $item = new \stdClass();
  $item->nid = $entity->id();
  $queue->createItem($item);
}

Inside this basic hook_entity_insert() implementation we do a very simple task. We first retrieve the QueueFactory object from the service container and use it to get a queue called cron_node_publisher. If we track things down, we notice that the get() method on the DatabaseQueueFactory simply creates a new DatabaseQueue instance with the name we pass to it.

Lastly, we create a small PHP object containing the node ID and create an item in the queue with that data. Simple.

The CRON queue worker

Next, let’s create a QueueWorker plugin that will process the queue items whenever Cron is run. However, because we know that we will also need one for manual processing that does the same thing, we will add most of the logic in a base abstract class. So inside the Plugin/QueueWorker namespace of our module we can have the NodePublishBase class:

/**
 * @file
 * Contains Drupal\npq\Plugin\QueueWorker\NodePublishBase.php
 */

namespace Drupal\npq\Plugin\QueueWorker;

use Drupal\Core\Entity\EntityStorageInterface;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Queue\QueueWorkerBase;
use Drupal\node\NodeInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;


/**
 * Provides base functionality for the NodePublish Queue Workers.
 */
abstract class NodePublishBase extends QueueWorkerBase implements ContainerFactoryPluginInterface {

  /**
   * The node storage.
   *
   * @var \Drupal\Core\Entity\EntityStorageInterface
   */
  protected $nodeStorage;

  /**
   * Creates a new NodePublishBase object.
   *
   * @param \Drupal\Core\Entity\EntityStorageInterface $node_storage
   *   The node storage.
   */
  public function __construct(EntityStorageInterface $node_storage) {
    $this->nodeStorage = $node_storage;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $container->get('entity.manager')->getStorage('node')
    );
  }

  /**
   * Publishes a node.
   *
   * @param NodeInterface $node
   * @return int
   */
  protected function publishNode($node) {
    $node->setPublished(TRUE);
    return $node->save();
  }

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    /** @var NodeInterface $node */
    $node = $this->nodeStorage->load($data->nid);
    if ($node instanceof NodeInterface && !$node->isPublished()) {
      return $this->publishNode($node);
    }
  }
}

Right off the bat we can see that we are using dependency injection to inject the NodeStorage into our class. For more information about dependency injection and the service container, feel free to check out my article on the topic.

In this base class we have two methods: publishNode() and the obligatory processItem(). The former publishes and saves a node that is passed to it. The latter loads the node using the node ID contained in the $data object and publishes it if it’s unpublished.

Now, let’s create a CronNodePublisher plugin that will use this logic on Cron runs:

namespace Drupal\npq\Plugin\QueueWorker;

/**
 * A Node Publisher that publishes nodes on CRON run.
 *
 * @QueueWorker(
 *   id = "cron_node_publisher",
 *   title = @Translation("Cron Node Publisher"),
 *   cron = {"time" = 10}
 * )
 */
class CronNodePublisher extends NodePublishBase {}

And that is all. We don’t need any other logic than what already is in our base class. Notice that, in the annotation, we are telling Drupal that this worker needs to be used by Cron to process as many items as it can within 10 seconds. How does this happen?

Whenever Cron runs, it uses the QueueWorkerManager to load all its plugin definitions. Then, if any of them have the cron key in their annotation, a Queue with the same name as the ID of the worker is loaded for processing. Lastly, each item in the queue is claimed and processed by the worker until the specified time has elapsed.

If we now save an unpublished node, it will most likely become published at the next Cron run.
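If you don’t want to wait, you can trigger cron yourself:

# Run cron immediately (or use the "Run cron" button at admin/config/system/cron):
$ drush cron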

The manual worker

Let’s create also the possibility for the Queue to be processed manually. First, let’s adapt the hook_entity_insert() implementation from before and change this line:

$queue = $queue_factory->get('cron_node_publisher');

to this:

$queue = $queue_factory->get('manual_node_publisher');

You can of course provide an admin screen for configuring which type of node publisher the application should use.

Second, let’s create our ManualNodePublisher plugin:

namespace Drupal\npq\Plugin\QueueWorker;

/**
 * A Node Publisher that publishes nodes via a manual action triggered by an admin.
 *
 * @QueueWorker(
 *   id = "manual_node_publisher",
 *   title = @Translation("Manual Node Publisher"),
 * )
 */
class ManualNodePublisher extends NodePublishBase {}

This is almost the same as with the CRON example but without the cron key.

Third, let’s create a form where we can see how many items are in the manual_node_publisher queue and process them all by the press of a button. Inside npq.routing.yml in the module root folder:

demo.form:
  path: '/npq'
  defaults:
    _form: '\Drupal\npq\Form\NodePublisherQueueForm'
    _title: 'Node Publisher'
  requirements:
    _permission: 'administer site configuration'

We define a path at /npq which should use the specified form that lives in that namespace and that we can define as such:

/**
 * @file
 * Contains \Drupal\npq\Form\NodePublisherQueueForm.
 */

namespace Drupal\npq\Form;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Queue\QueueFactory;
use Drupal\Core\Queue\QueueInterface;
use Drupal\Core\Queue\QueueWorkerInterface;
use Drupal\Core\Queue\QueueWorkerManagerInterface;
use Drupal\Core\Queue\SuspendQueueException;
use Symfony\Component\DependencyInjection\ContainerInterface;

class NodePublisherQueueForm extends FormBase {

  /**
   * @var QueueFactory
   */
  protected $queueFactory;

  /**
   * @var QueueWorkerManagerInterface
   */
  protected $queueManager;


  /**
   * {@inheritdoc}
   */
  public function __construct(QueueFactory $queue, QueueWorkerManagerInterface $queue_manager) {
    $this->queueFactory = $queue;
    $this->queueManager = $queue_manager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('queue'),
      $container->get('plugin.manager.queue_worker')
    );
  }
  
  /**
   * {@inheritdoc}.
   */
  public function getFormId() {
    return 'demo_form';
  }
  
  /**
   * {@inheritdoc}.
   */
  public function buildForm(array $form, FormStateInterface $form_state) {
    /** @var QueueInterface $queue */
    $queue = $this->queueFactory->get('manual_node_publisher');

    $form['help'] = array(
      '#type' => 'markup',
      '#markup' => $this->t('Submitting this form will process the Manual Queue which contains @number items.', array('@number' => $queue->numberOfItems())),
    );
    $form['actions']['#type'] = 'actions';
    $form['actions']['submit'] = array(
      '#type' => 'submit',
      '#value' => $this->t('Process queue'),
      '#button_type' => 'primary',
    );
    
    return $form;
  }
  
  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {
    /** @var QueueInterface $queue */
    $queue = $this->queueFactory->get('manual_node_publisher');
    /** @var QueueWorkerInterface $queue_worker */
    $queue_worker = $this->queueManager->createInstance('manual_node_publisher');

    while ($item = $queue->claimItem()) {
      try {
        $queue_worker->processItem($item->data);
        $queue->deleteItem($item);
      }
      catch (SuspendQueueException $e) {
        $queue->releaseItem($item);
        break;
      }
      catch (\Exception $e) {
        watchdog_exception('npq', $e);
      }
    }
  }
}


We are again using dependency injection to inject the QueueFactory and the manager for QueueWorker plugins. Inside buildForm() we are creating a basic form structure and using the numberOfItems() method on the queue to tell the user how many items they are about to process. And finally, inside the submitForm() method we take care of the processing. But how do we do that?

First, we load the queue and instantiate a queue worker (in both cases we use the manual_node_publisher ID). Then we run a while loop until all the items have been processed. The claimItem() method is responsible for blocking a queue item from being claimed by another worker and returning it for processing. After it gets processed by the worker, we delete it. In the next iteration the next item is returned, and so on until no items are left.

Although we have not used it, the SuspendQueueException is meant to indicate that during the processing of the item, the worker found a problem that would most likely make all other items in the queue fail as well. And for this reason it is pointless to continue to the next item so we break out of the loop. However, we also release the item so that when we try again later, the item is available. Other exceptions are also caught and logged to the watchdog.

Now if we create a couple of nodes and don’t publish them, we’ll see their count inside the message if we navigate to /npq. By clicking the submit button we process (publish) them all one by one.

This has been a demonstration example only. It’s always important to take into account the potential load of processing a large number of items and either limit that so your request doesn’t time out or use the Batch API to split them into multiple requests.

Conclusion

In this article we’ve looked at the Queue API in Drupal 8. We’ve learned some basic concepts about how it is built and how it works, but we’ve also seen some examples of how we can work with it. Namely, we’ve played with two use cases by which we can publish unpublished nodes either during Cron runs or manually via an action executed by the user.

Have you tried out the Queue API in Drupal 8? Let us know how it went!

Daniel Sipos

Meet the author

Daniel Sipos is a Drupal developer who lives in Brussels, Belgium. He works professionally with Drupal but likes to use other PHP frameworks and technologies as well. He runs webomelette.com, a Drupal blog where he writes articles and tutorials about Drupal development, theming and site building.
May 29 2015

In the past four years there has been an explosion of new technologies in front-end development. We are inundated with new projects like Bower, Cucumber, Behat and KSS. It is a lot to take in. At the past two DrupalCons there have been sessions about this overload (Austin 2014, My Brain is Full: The State of Front-end Development). Essentially those sessions are asking “What the hell is going on?”

As John Albin pointed out in his 2015 DrupalCon presentation, Style Guide Driven Development: All hail the robot overlords!, being a front-end developer now means constantly learning. The pace at which front-end development is evolving is too much to keep up with. There’s not enough time to read about the latest tools and techniques, let alone implement them! What we need is someone — or something — to do most of the work for us, so we can focus on better development.

Everyone is describing the one little piece they’ve created, but don’t explain (or even reference!) the larger concepts of how all of these elements link together.

Frank Chimero, July 2014 Designer News AMA

So what big picture will help us understand today’s new front-end technologies? I’ve been doing front-end web development since 2007 (when building layouts with tables was the thing to do). Recently I realized that the current front-end flux is easily explainable as the beginning of a monumental shift in web development: web development is embracing agile development. And the entire way we build websites is being turned inside out.

How does web development do agile? It creates styleguide-driven development (SGDD).

Styleguide-Driven Development (SGDD)

Styleguide-Driven Development (SGDD) is a practice that encourages the separation of UX, design & frontend from backend concerns. This is achieved by developing the UI separately in a styleguide. By separating the UI and backend tasks so they don’t rely on each other, it allows teams to iterate fast on prototypes and designs without having to make changes to the backend. With careful planning they should plug-and-play together nicely.

Styleguide-Driven Development isn’t just for big teams working on large applications; the core concept of developing UI elements separately in a style guide (living or static) can benefit even a sole developer working on a single-page app.

The only requirements for styleguide-driven development are:

Front-end Technology Categories

We can categorize new front-end projects into just three categories:

  • Front-end Performance (make shit faster)
    The front-end is where you see most of the lag while browsing websites, so it is critical to focus in this area.
  • Components (make shit modular)
    A way of bundling reusable chunks of HTML, CSS, JavaScript and other assets.
  • Continuous Integration (automate shit)
    Automation to ensure what you build today doesn’t break what you built yesterday.

NodeJS is an excellent example of a project that integrates all three: its web service focus is all about performance, its package ecosystem is all about components, and you build your project with a package.json to help automate things.

If you understand those three concepts, you can make sense of any of today’s new front-end technologies. There may be hundreds of new projects, but they are just different programming languages, different platforms, different projects and different APIs implementing one or more of those three ideas.

A core concept of agile development is controlling and minimizing risk. One of the tools to prevent the risk of regressions and minimize the risk of refactoring is continuous integration. While back-end developers and devops have been working this way for a while now, we are only now starting to see it on the front-end as those developers slowly get training in agile.

And to minimize complexity and risk of failure, front-end developers have started to develop components of HTML, CSS, and JS that are reusable and maintainable. Bootstrap? Foundation? Those are just pre-made reusable component libraries. But custom-designed websites and apps are also using the same technique while building custom component libraries.

Even as agile creeps into all the layers of web development, we still need a grand-unifying process that makes the new agile web development possible, unifying back-end, front-end, design, everything. Surprisingly, the once-derided style guide is the key.

Back in the day, website designs were always accompanied by style guides. Even if they weren’t out-of-date before they were delivered (“Ignore that part… I didn’t have time to update it after client feedback”), they always became out-of-date quickly. Since they were separate documents, they didn’t get maintained to reflect the current state of the website and became orphaned documents. But thanks to agile’s continuous integration, style guides can now be auto-generated from the website’s own source code, ensuring that the style guide and the website never get out of sync.

Component-based Web Designs

Patterns are big business in IT. You can’t get far in OO programming before hitting a book about design patterns, stressing the need for standardized solutions to particular problems. When it comes to web development, though, design patterns never really took off. Maybe because of the chaotic nature of the so-called web standards, maybe because we as an industry are just not ready for them yet. That’s no reason to ignore their potential though.

Large web applications generally have a lot of CSS files and often have many developers working on those files simultaneously. With the advent of so many frameworks, guidelines, tools, and methodologies (OOCSS, SMACSS, BEM, etc.), developers need a CSS architecture that is maintainable, manageable, and scalable. The answer is web components.

Web components are a collection of standards that are working their way through the W3C. They allow us to bundle up markup and styles into reusable HTML elements that are truly encapsulated. What this means is we need to start thinking about component-based CSS development. You’ll hear web development components go by many names:

  • “Object” in OOCSS
  • “Module” in SMACSS
  • “Block” in BEM’s Block-Modifier
  • “Web component” in HTML

Once components are planned, it’s easy to write a structure that handles all requirements. Components should be built to be:

  1. Applicable to a loose collection of HTML elements
  2. Repeatable
    (even if never repeated)
  3. Specific
    Replace CSS specificity with specific names
  4. Self-contained
    Styles do not bleed onto anything else
  5. Nestable

This especially helps when building a site with user-contributed content. WordPress, Drupal, Joomla, and other CMSs use WYSIWYG editors, where you’ll often have no control over the elements or classes used. Having a base component that sets default styles for anchors, headlines, paragraphs, and other elements makes it a lot easier to style contributed content.

CSS Design Components

When building projects, think of everything as a component. A great example is the SMACSS approach:

  1. Base Components
  2. Layout Components
  3. Components
    • Component (BEM) (.flower)
    • Element (BEM) (.flower__petals)
    • Modifier (BEM) (.flower--tulip)
    • State (.flower:hover, .flower.is-pollinating, media queries, print styles)
    • Skin (.is-night .flower, should affect many components)

Drupal 8 Component-based CSS Approach

Drupal 8 uses the same component-based CSS design pattern (https://www.drupal.org/node/1886770):

.the-component
.the-component--modifier
.the-component__an-element
.the-component--modifier__an-element
.the-component.is-state
.the-component:hover
@media all { .the-component {} }
.the-skin .the-component

Remember not to make it complicated. Never build a class like .channel-tab__guide__upcoming-video__info__time; avoid nesting components, elements, and modifiers all in one. Also, don’t try to be a perfectionist: you’ll end up spending a ton of time trying to decide what to name something instead of developing it.

Sucking at something is the first step to becoming sorta good at something.

The “Fugly” Selector Hack for Drupal

Having trouble inserting a class on an element in Drupal? This is an especially common problem with links. Though Drupal offers many hooks to drill down to most containers, it’s sometimes hard to add a class to minor elements like anchor tags.
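To see why, here is a hedged sketch (assuming a Drupal 7 theme; the theme and field names are hypothetical) of how far a preprocess hook gets you: you can put a class on the field’s wrapper, but not on the anchor rendered inside it.

/**
 * Implements hook_preprocess_field().
 *
 * A sketch only: 'mytheme' and 'field_feature_title' are placeholder names.
 */
function mytheme_preprocess_field(&$variables) {
  if ($variables['element']['#field_name'] == 'field_feature_title') {
    // Easy: add a BEM class to the field wrapper.
    $variables['classes_array'][] = 'feature__title';
    // Hard: the anchor inside the field output is rendered deeper in the
    // theme layer, out of reach of this hook.
  }
}

With that limitation in mind, the Sass below works around it: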

// Pretty classes.
%feature__title-link {
  // Default link styles would live here.
}

%feature__title-link-is-hover {
  // Hover/focus styles would live here.
}

// Ugly classes.
.feature__title a {
  &:link,
  &:visited {
    @extend %feature__title-link;
  }
  &:hover,
  &:focus {
    @extend %feature__title-link-is-hover;
  }
}

In the example above, the anchor tag is the element I don’t have access to. I would have preferred to add the class .feature__title-link directly to the anchor tag. Unfortunately, sometimes you’ll be forced to apply this kind of selector hack when working with Drupal.

Continuous Integration with Automated Style Guides

Style guides promote a systematic approach to composing layouts, something that used to be just one small task within the user interface development process. Incorporating style guides into the development process places importance on the tools used to build the component catalogue.

With an automated style guide documenting your custom component library, building a website becomes straightforward.

  1. Pick a feature to begin development on.
  2. Look through the existing style guide to see if it already contains a design component that you can use as-is or tweak.
  3. If the new feature requires a new component, designers and front-end developers should work together to design and implement it.
  4. Repeat.

Immunize your code — don’t get sick!

The problem with style guides is keeping them up-to-date. It takes time, valuable time that could be used to develop your project. Because of the resources and time involved, many companies avoid building them entirely as cost-prohibitive. Doing this is like not immunizing your kid. Sooner rather than later, your project will become sick. Symptoms include bloat, spaghetti code, and design inconsistencies. As time goes on, the symptoms will become overwhelming and you’ll feel the need to put your project down and start from scratch.

That’s where styleguide-driven development comes into play. Its support for continuous integration and automation will save you time to focus on actual development.

Styleguide-Driven Development

Knyle Style Sheets (KSS) to the rescue!

As you write your CSS, software like KSS can automatically generate style guides for you. KSS (https://github.com/kss-node/kss-node) is a popular choice due to its simplicity: it’s basically a spec for writing CSS comments so that the parser can automatically generate a style guide for you.

/*
Button

Your standard button suitable for clicking.

:hover     - Highlights when hovering.
.shiny     - Do not press this big, shiny, red button.

Markup: index.html (optional)

Style guide: components.button (defines hierarchy)
*/
.button {

}
.button.shiny {

}

Used in conjunction with a task runner like Grunt, it becomes a powerful ally in your web development workflow, saving you a ton of time. With one command you can:

  1. Build CSS from Sass/LESS/etc.
  2. Lint CSS/Sass/JavaScript
  3. Minify and concatenate scripts/images
  4. Build an updated style guide
  5. Run visual regression tests
  6. Live-reload the browser

Take it further and create a Gemfile to install and load your project dependencies (Sass, Susy, Grunt, etc.). This makes it easier for new developers to get the project set up on their local environments.


Author: Ben Marshall

Red Bull Addict, Self-Proclaimed Grill Master, Entrepreneur, Workaholic, Front End Engineer, SEO/SM Strategist, Web Developer, Blogger

May 01 2015
May 01

In this article we are going to look at automated testing in Drupal 8. More specifically, we are going to write a few integration tests for some of the business logic we wrote in the previous Sitepoint articles on Drupal 8 module development. You can find the latest version of that code in this repository along with the tests we write today.


But before doing that, we will talk a bit about what kinds of tests we can write in Drupal 8 and how they actually work.

Simpletest (Testing)

Simpletest is the Drupal-specific testing framework. For Drupal 6 it was a contributed module, but since Drupal 7 it has been part of the core package. Simpletest is now an integral part of Drupal core development, allowing for safe API modifications thanks to extensive test coverage of the codebase.

Right off the bat I will mention the authoritative documentation page for Drupal testing with Simpletest. There you can find a hub of information related to how Simpletest works, how you can write tests for it, what API methods you can use, etc.

By default, the Simpletest module that comes with Drupal core is not enabled, so we will have to enable it ourselves if we want to run tests. It can be found on the Extend page under the name Testing.

Once that is done, we can head to admin/config/development/testing and see all the tests currently available for the site. These include both core and contrib module tests. At the very bottom there is also the Clean environment button, which we can use if any of our tests quit unexpectedly and leave behind test tables in the database.

How does Simpletest work?

When we run a test written for Simpletest, the latter uses the existing codebase and the instructions found in the test to create a separate Drupal environment in which the test can run. This means adding additional tables to the database (prefixed with simpletest_) and test data used to replicate the site instance.

Depending on the type of test we are running and what it contains, the nature of this replication can differ. In other words, the environment can have different data and core functionality depending on the test itself.

What kinds of tests are there in Drupal 8?

There are two main types of tests that we can write for Drupal 8: unit tests using PHPUnit (which is now in core) and functional tests (using Simpletest). The latter can be split further into two kinds: web tests (which require web output) and kernel tests (which do not). In this article we will cover only web tests, because most of the functionality we wrote in the previous articles manifests itself through output, so that’s how we need to test it as well.

Writing any type of test starts by implementing a specific class and placing it inside the src/Tests folder of the module it tests. I also encourage you to read this documentation page that contains some more information on this topic as I do not want to duplicate it here.

Our tests

As I mentioned, in this article we will focus on providing test coverage for some of the business logic we created in the series on Drupal 8 module development. Although there is nothing complicated happening there, the demo module we built offers a good example for starting out our testing process as well. So let’s get started by first determining what we will test.

By looking at the demo module, we can delineate the following aspects we can test:

  • The custom page at the demo path and the text it renders using our service
  • The custom form at demo/form, its default configuration value, and its validation logic
  • The custom block and its configurable display text

That’s pretty much it. The custom menu link we defined inside the demo.links.menu.yml could also be tested, but that should already work out of the box so I prefer not to.

For the sake of brevity and the fact that we don’t have too much we need to test, I will include all of our testing methods into one single class. However, you should probably group yours into multiple classes depending on what they are actually responsible for.

Inside a file called DemoTest.php located in the src/Tests/ folder, we can start by adding the following:

<?php

namespace Drupal\demo\Tests;

use Drupal\simpletest\WebTestBase;

/**
 * Tests the Drupal 8 demo module functionality
 *
 * @group demo
 */
class DemoTest extends WebTestBase {

  /**
   * Modules to install.
   *
   * @var array
   */
  public static $modules = array('demo', 'node', 'block');

  /**
   * A simple user with 'access content' permission
   */
  private $user;

  /**
   * Perform any initial set up tasks that run before every test method
   */
  public function setUp() {
    parent::setUp();
    $this->user = $this->drupalCreateUser(array('access content'));
  }
}

Here we have a simple test class which, for every test it runs, will enable the modules in the $modules property and create a new user stored in the $user property (by virtue of running the setUp() method).

For our purposes, we need to enable the demo module because that is what we are testing, the block module because we have a custom block plugin to test, and the node module because our logic uses the access content permission defined by it. Additionally, the user is created just so we can make sure this permission is respected.

For the three bullet points we identified above, we will now create three test methods. Keep in mind that each needs to start with the prefix test in order for Simpletest to run it automatically.

Testing the page

We can start by testing the custom page callback:

/**
 * Tests that the 'demo' path returns the right content
 */
public function testCustomPageExists() {
  $this->drupalLogin($this->user);

  $this->drupalGet('demo');
  $this->assertResponse(200);

  $demo_service = \Drupal::service('demo.demo_service');
  $this->assertText(sprintf('Hello %s!', $demo_service->getDemoValue()), 'Correct message is shown.');
}

Here is what this code does.

First, we log in with the user we created in the setUp() method and then navigate to the demo path. Simpletest handles this navigation using its own internal browser. Next, we assert that the response of the last accessed page is 200. This validates that the page exists. However, this is not enough because we need to make sure the text rendered on the page is the one loaded from our service.

For this, we statically access the \Drupal class and load our service. Then we assert that the page outputs the hello message composed of the hardcoded string and the return value of the service’s getDemoValue() method. It’s probably a good idea to write a unit test for whatever logic happens inside the service but for our case this would be quite redundant.
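Still, purely for reference, here is a heavily hedged sketch of what such a unit test could look like, assuming the service class is \Drupal\demo\DemoService, that it has no constructor dependencies, and that the test lives in tests/src/Unit:

<?php

namespace Drupal\Tests\demo\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * Sketch of a unit test for the demo service.
 *
 * @group demo
 */
class DemoServiceTest extends UnitTestCase {

  /**
   * Tests the demo value getter without a full Drupal environment.
   */
  public function testGetDemoValue() {
    $service = new \Drupal\demo\DemoService();
    // Swap this assertion for the exact string your service returns.
    $this->assertNotEmpty($service->getDemoValue());
  }
}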

And that’s it with the page related logic. We can go to the testing page on our site, find the newly created DemoTest and run it. If all is well, we should have all green and no fails.

[Screenshot: Drupal 8 automated test results]

Testing the form

For the form we have another method, albeit more meaty, that tests all the necessary logic:

/**
 * Tests the custom form
 */
public function testCustomFormWorks() {
  $this->drupalLogin($this->user);
  $this->drupalGet('demo/form');
  $this->assertResponse(200);

  $config = $this->config('demo.settings');
  $this->assertFieldByName('email', $config->get('demo.email_address'), 'The field was found with the correct value.');

  $this->drupalPostForm(NULL, array(
    'email' => '[email protected]'
  ), t('Save configuration'));
  $this->assertText('The configuration options have been saved.', 'The form was saved correctly.');

  $this->drupalGet('demo/form');
  $this->assertResponse(200);
  $this->assertFieldByName('email', '[email protected]', 'The field was found with the correct value.');

  $this->drupalPostForm('demo/form', array(
    'email' => '[email protected]'
  ), t('Save configuration'));
  $this->assertText('This is not a .com email address.', 'The form validation correctly failed.');

  $this->drupalGet('demo/form');
  $this->assertResponse(200);
  $this->assertNoFieldByName('email', '[email protected]', 'The field was found with the correct value.');
}

The first step is like before. We go to the form page and assert a successful response. Next, we want to test that the email form element exists and that its default value is the value found inside the default module configuration. For this we use the assertFieldByName() assertion.

Another aspect we need to test is that saving the form with a correct email address does what it is supposed to: save the email to configuration. So we use the drupalPostForm() method on the parent class to submit the form with a correct email and assert that a successful status message is printed on the page as a result. This proves that the form saved successfully but not necessarily that the new email was saved. So we redo the step we did earlier but this time assert that the default value of the email field is the new email address.

Finally, we also need to test that the form doesn’t submit with an incorrect email address. We do so again in two steps: we test that form validation fails when submitting the form with a bad address, and that loading the form again does not show the incorrect email as the default value of the email field.

Testing the block

/**
 * Tests the functionality of the Demo block
 */
public function testDemoBlock() {
  $user = $this->drupalCreateUser(array('access content', 'administer blocks'));
  $this->drupalLogin($user);

  $block = array();
  $block['id'] = 'demo_block';
  $block['settings[label]'] = $this->randomMachineName(8);
  $block['theme'] = $this->config('system.theme')->get('default');
  $block['region'] = 'header';
  $edit = array(
    'settings[label]' => $block['settings[label]'],
    'id' => $block['id'],
    'region' => $block['region']
  );
  $this->drupalPostForm('admin/structure/block/add/' . $block['id'] . '/' . $block['theme'], $edit, t('Save block'));
  $this->assertText(t('The block configuration has been saved.'), 'Demo block created.');

  $this->drupalGet('');
  $this->assertText('Hello to no one', 'Default text is printed by the block.');

  $edit = array('settings[demo_block_settings]' => 'Test name');
  $this->drupalPostForm('admin/structure/block/manage/' . $block['id'], $edit, t('Save block'));
  $this->assertText(t('The block configuration has been saved.'), 'Demo block saved.');

  $this->drupalGet('');
  $this->assertText('Hello Test name!', 'Configured text is printed by the block.');
}

For this test we need another user that also has the permission to administer blocks. Then we create a new instance of our custom demo_block with no value inside the Who field and assert that a successful confirmation message is printed as a result. Next, we navigate to the front page and assert that our block shows up and displays the correct text: Hello to no one.

Lastly, we edit the block and specify a Test name inside the Who field and assert that saving the block configuration resulted in the presence of a successful confirmation message. And we close off by navigating back to the home page to assert that the block renders the correct text.

Conclusion

In this article, we’ve seen how simple it is to write some basic integration tests for our Drupal 8 business logic. It involves creating one or more class files which simply make use of a large collection of API methods and assertions to test the correct behavior of our code. I strongly recommend you give this a try and start testing your custom code as early as possible in order to make it more stable and less prone to breaking later on when changes are made.

Additionally, don’t let yourself get discouraged by the slow process of writing tests. This is mostly only in the beginning, until you get used to the APIs and become as fluent with them as you are with the actual logic you are testing. I should also mention that this article presented a very high-level overview of the testing ecosystem in Drupal 8 and kept the tests quite simple, so I recommend a more in-depth look into the topic going forward.

Apr 25 2015
Apr 25


I’ve been hearing about Yeoman for quite some time now, pretty much since the project took off, or soon after. A tool born in the JavaScript community, it crossed my path when I was learning about Node.js and the different tools and frameworks available for it, either in my free time or as part of my labs time at my company. Sadly, I didn’t really pay much attention to it. At the end of the day, Node.js was just something I was learning about, not something I was going to be able to introduce into projects in the short term. And even if I *could*, it wasn’t in my plans anyway.

The other reason I didn’t look into it more closely was that I mistakenly thought it was only useful for JavaScript developers. Some time ago I noticed that Yeoman was getting plenty of attention from other communities too, and on a closer look I understood that it isn’t a tool for Node.js but rather a tool built on top of Node.js, so I decided to give it a try and see if I could make something useful out of it for Drupal development.

Warming up…

So, what’s Yeoman then? It’s a code scaffolding tool: a utility to generate code for web apps. What’s the purpose of that? Developers save time by quickly generating the skeleton of the web apps they build, leaving more time for the important things, such as the app’s most complex business logic, integrations, testing, etc. In short, it’s a tool that should help developers deliver more quality in their apps. To get a better picture of what Yeoman can do, I’d point everyone to their site, which has some nice tutorials and very good documentation for writing your own generators.

My plan was to write a few generators for the most common pieces of boilerplate code that I normally have to write in my projects. Unsurprisingly, I found that there are already a few Yeoman generators for Drupal out there, so I thought I should review them and see if they’re of any use to me before writing one that already exists. Yes, that can be a boring task if there are too many generators, but I was lucky that there aren’t *that* many for Drupal, so I just spent a couple of hours testing them and documenting my findings. Hopefully, this blog post will help other Drupal developers find out in a matter of minutes whether the existing generators are useful for them or not. So, let’s get into it!

1.- Generator-drupalmodule

Github repository here. Creation date: Around 2 years ago.

Structure created:

module/
|- drupalmodule.css
|- drupalmodule.info
|- drupalmodule.js
|- drupalmodule.module
|- package.json

This one scaffolds a basic structure for a simple module. It needs Bower and a package.json file to download dependencies, but that’s not a problem anyway since you’ll probably have drush. Creation is a bit unintuitive: you need to create the module folder first, cd into it, then execute yo drupalmodule.

The generator asks if you want JS and CSS files, but it doesn’t even add the functions needed to attach them to the page. It’s a general-purpose generator and doesn’t offer anything that isn’t in module_builder already.

2.- Generator-drupal-module

Github repository here. Creation date: Around 2 months ago. Latest commit about 2 weeks ago.

Structure created:

module/
|- templates/ (if hook_theme chosen)
|- drupal_module.info
|- drupal_module.install
|- drupal_module.module

Neater than drupalmodule on the surface, but it doesn’t do much more. It asks if we want hook_theme(), hook_menu(), hook_permission(), and hook_block_info()/hook_block_view() implementations, which is nice, yet that doesn’t make it much of a gain compared to other simple scaffolding tools, like PhpStorm live templates. In contrast to the drupalmodule generator, this one doesn’t ask if we want a CSS or JS file.

3.- Generator-drupalentities

Github repository here. Creation date: 9 months ago. Latest commit about 6 months ago.

Structure created (“publisher” entity):

Views and license files are optional, based on the settings specified in the command-line.

module/
|- views/
|  |- publisher.views.inc
|  |- publisher_handler_delete_link_field.inc
|  |- publisher_handler_edit_link_field.inc
|  |- publisher_handler_link_field.inc
|  |- publisher_handler_publisher_operations_field.inc
|- LICENSE.txt
|- publisher.admin.inc
|- publisher.info
|- publisher.install
|- publisher.module
|- publisher.tpl.php
|- publisher-sample-data.tpl.php
|- publisher_type.admin.inc

Generates a full Drupal module for a custom entity, based on the structure proposed by the model module.

One issue I experienced is that if I select “add bundles”, the Field API screen seems broken (it doesn’t load). A general “fields” tab appears, but if you try to add a field you get some errors and are redirected to a 404. So bundles are offered in the plugin creation menu, but not really supported! The same goes for revisions: it’s asked about on the command-line prompt, but doesn’t seem to do much. Not choosing bundle support still lets you add bundles in the admin UI, though, and doesn’t seem to break anything.

In spite of the issues I had testing it (I didn’t bother investigating them much), it seems to me a useful generator. The only reason I doubt I’ll be using it is that it’s based, as mentioned, on the model project for Drupal, which is quite nice but rather outdated now (4 years old) and doesn’t leverage some of the latest Entity API goodies. Also, I’ve developed some opinions and preferences around how to structure custom entity types, so adopting the model approach would be, in a sense, a step backwards.

4.- Generator-ctools-layout

Github repository here. Creation date: 5 months ago. Latest commit about 14 days ago.

Structure created:

my_layout/
|- admin_my_layout.css
|- my_layout.css
|- my_layout.inc
|- my_layout.png
|- my-layout.tpl.php

Generates a ctools layout plugin folder structure with all the files needed to get it working out of the box. It makes no assumptions about how the content will be displayed, so there’s no styling by default (which is perfect), and it allows you to specify as many regions as desired. It’s quite likely that I’ll start using this in my projects. No cons or negative aspects to mention!

5.- Generator-gadget

Github repository here. Creation date: 1 month ago. Latest commit about 1 month ago.

This one, rather than a code generator for Drupal elements, is a Yeoman generator that serves as a scaffolding tool for another repo from Phase2. While I didn’t get to test it out, the grunt-drupal-tasks repo looked really interesting (check the features here), and I might give it a go, although I’m familiar with Gulp rather than Grunt. Long story short: a very interesting project, but it’s not meant to scaffold any code for your Drupal modules.

6.- Generator-drupalformat

Github repository here. Creation date: 6 months ago. Latest commit about 3 months ago.

Structure created:

drupalformat/
|- includes/
|  |- js/
|  |  |- drupalformat.settings.js
|  |- theme/
|  |  |- drupalformat.theme.inc
|  |  |- drupalformat.tpl.php
|- drupalformat.api.php
|- drupalformat.info
|- drupalformat.install
|- drupalformat.module
|- drupalformat.variable.inc
|- generator.json
|- LICENSE.txt

This one is very specific, tailored to provide views and field formatters for jQuery plugins, and it’s based on the owlcarousel module. It’s very useful if what you’re looking for is to easily integrate other jQuery plugins with your Drupal site. A very interesting generator, as it’s focused on scaffolding the most repetitive parts of a very specific task instead of trying to be a generic solution that covers many things. You can see another great example leveraging this generator on Echo.co’s blog, for the jQuery Oridomi plugin. Not something I’ll pick up daily, but I’ll definitely keep this generator in mind if I have to integrate new JavaScript libraries.

7.- Generator-drupal-component

Github repository here. Creation date: 6 months ago. Latest commit about 3 months ago.

Structure created:

drupal_component/
|- ctools-content_types/
|  |- drupal_component.inc
|- drupal_component.scss
|- drupal_component.html.twig
|- drupal_component.info
|- drupal_component.js
|- drupal_component.module
|- drupal_component.tpl.php
|- drupal_component.views.inc

I found this one rather peculiar. The boilerplate code it produces is rather basic, yet it offers options such as creating a views style plugin or a ctools content_type plugin. The good thing is that each component can be generated individually, which is rather convenient. The only issue that keeps me from using it is that, again, none of the components offer particularly advanced options that could benefit from an interactive tool like Yeoman (e.g. asking whether the ctools content type plugin will need one or more settings forms). For my particular case, I can generate all of these easily with PhpStorm live templates or template files.

Is that all, folks?

Ye… no! There are indeed a few more generators built around Drupal projects in the Yeoman registry (click here and search for “Drupal”). Some of them are very interesting, doing things such as scaffolding themes or setting up a headless Drupal.

However, I decided to leave those out of the in-depth review because, as interesting as they are, they cover aspects of Drupal development where people often have very specific preferences: how to structure a theme, for example, or what tools to use to create a headless Drupal.

Since the goal of this article was to give a bird’s-eye view of the generators Drupal developers can use right now without changing anything in the way they work, I preferred to describe mainly the generators built around Drupal modules and more specific components. I hope this blog post has saved you some time. Expect a new one on this topic as soon as I’ve written my first Yeoman plugin.

Mar 28 2015
Mar 28
[Image: Warcraft peasants building a town hall]

Life before Code Generators

I love automation.

It’s something that lives deep inside me, and I always seem to seek it out as hard as I can, even when dealing with the most trivial things. Yes, even those things for which automation might not give huge benefits at all. That is, perhaps, because I fit the prototype of the lazy developer who wants to reduce the work to be done as much as possible, or simply because I like the challenge of grabbing a problem that requires some hours and several steps to solve, and turning it into a trivial matter that can be done in less time, by anyone.

And that’s what I did a year and some months ago, when I wrote the Field Type Generator module for Drupal 7, which I’m releasing today. Depending on your background, and the processes and tools that you use for Drupal development, this might or might not be as great a tool as it is for me, but I can tell you that in my case, it’s a little gem that has saved me a lot of time over the last year.

There are some reasons why I decided to write the module:

  1. As mentioned, I love automation.
  2. This book, by Chad Fowler, convinced me that my love for automation was good, and that I should automate things. And that includes writing code generators.
  3. I was finding myself writing plenty of custom field types to be used in Drupal nodes and entities. All from scratch. Best scenario involved copying and pasting code to amend plenty of things afterwards.
  4. There wasn’t a clear, standard approach to the task in the company I was working for. Most devs were fairly new to the concept of custom field types, in fact, and would spend a fair amount of time writing them from scratch, too.

So what is it, then?

The Field Type Generator is a code generator that creates custom field types based on Drupal’s Field API. It allows developers to create a custom field type containing as many columns as they want and download it as a fully-working module that they can drop into a Drupal repository, ready to be installed and used on any Drupal site.

The module makes as few assumptions as possible about the purpose of the created custom field type, and it’s expected that developers will still have to add some custom code to the generated module for their specific needs. So the goal is not to solve anyone’s problem in 5 minutes, but instead to solve about three quarters of the task in 5 minutes, letting developers focus on their particular requirements (e.g. custom validation of data).
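To give you an idea of the kind of boilerplate the generator takes off your plate, here is a hedged sketch of the Drupal 7 hook_field_info() implementation a generated module starts from; the module name (“myfield”) and field type (“event”) are hypothetical, and the real generated code will differ:

/**
 * Implements hook_field_info().
 *
 * A sketch only: all names below are placeholders.
 */
function myfield_field_info() {
  return array(
    'myfield_event' => array(
      'label' => t('Event'),
      'description' => t('Stores a date column and a free-form venue column.'),
      'default_widget' => 'myfield_event_widget',
      'default_formatter' => 'myfield_event_formatter',
    ),
  );
}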

[GIF: computer guy giving a thumbs up]

If he approves, you can’t just ignore it.

Why now?

Sure, I could have released it before, but I didn’t. Why? Because I wanted my company to benefit from it a bit longer before releasing it for everyone out there. I could mention a few more reasons, but there’s no point in denying that this was probably the main one.

So why now? Well, I know a few developers out there that make heavy use of custom field types, and I’m fairly confident that this utility is going to be very useful for many people, in the same way it’s been for me and some of my colleagues.

Just go and try it

I don’t wanna keep you here for long. The good stuff is in the module. Really. Just download and enable it, create a custom field type, and see what you get. If you like it, please comment. If you don’t like it, comment too, but be nice! I’m leaving a 4-minute video here showing what the module can do for you. Also, you can get the module generated in the video from here.

[Embedded video: a 4-minute demo of the Field Type Generator module]

Did you like it? Don’t forget to share it with any dev who might be interested! If you find any issues with it, I’ll be happy to apply patches, but keep in mind this was done as a proof of concept, and even though it works well, it’s not meant to be a perfect tool to cover all possible cases out there!

Final note for developers

There’s a small caveat a colleague mentioned to me recently that you might need to amend after generating a custom field type, depending on your use case. It’s pretty simple, and I’ll get it sorted in code soon. This issue does not affect the generated module itself, which will work as expected. If you’re not saving the entity that uses your custom field programmatically, this doesn’t affect you.

If you are, you’ll need to edit the generated “_field_presave()” hook implementation. The reason is that there’s a wrapper array containing all the column data for your field, added for better UX in entity forms. When saving an entity from code (not from the entity form), the wrapper won’t be present, as it’s added by the widget form, and Drupal will throw an error. All you need to do is add a check so that the wrapper is only used when the entity is being saved after a form submission (e.g. by checking the URL path). That way, when the entity is saved from anywhere else, the values of the field will be read just as they were loaded from the database.
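As a rough sketch of that fix (the “myfield” module and “value_wrapper” key are placeholders, and this version checks for the wrapper key directly instead of inspecting the URL path):

/**
 * Implements hook_field_presave().
 */
function myfield_field_presave($entity_type, $entity, $field, $instance, $langcode, &$items) {
  foreach ($items as $delta => $item) {
    // The widget form wraps the column values for better UX in entity
    // forms; when saving programmatically the wrapper is absent, so only
    // unwrap it when it is actually present.
    if (isset($item['value_wrapper'])) {
      $items[$delta] = $item['value_wrapper'];
    }
  }
}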

Update: The caveat of _field_presave() mentioned above is fixed in 7.x-1.2 branch.

Feb 11 2015
Feb 11

In this article I am going to show you how you can integrate Pushover with your Drupal site. I will illustrate a couple of examples of how you can use Pushover to notify yourself as soon as something happens on your site.

The code I write in this article is also available in this repository so you can just clone that if you want to follow along.

What is Pushover?

Pushover is a web and mobile application that allows you to get real-time notifications on your mobile device. The way it works is that you install an app on your Android or Apple device, and using a handy API you can send that app notifications. The great thing is that this happens more or less in real time (depending on your internet connection), as Pushover uses the Google and Apple servers to send the notifications.
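Under the hood, the API is a plain HTTP POST. Here is a minimal sketch, independent of Drupal, with placeholder tokens (the Pushover class used later in this article is essentially a wrapper around this kind of call):

<?php
// Send a single notification through the Pushover REST API.
$ch = curl_init('https://api.pushover.net/1/messages.json');
curl_setopt_array($ch, array(
  CURLOPT_POSTFIELDS => array(
    'token' => 'YOUR_APP_TOKEN', // Generated for your application.
    'user' => 'YOUR_USER_KEY',   // Shown when you log in to Pushover.
    'message' => 'Hello from my Drupal site!',
  ),
  CURLOPT_RETURNTRANSFER => TRUE,
));
$response = curl_exec($ch);
curl_close($ch);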

The price is also very affordable. At a one-time rate of $4.99 USD per platform (Android, Apple, or desktop), you can use it on any number of devices under that platform. You also get a free 5-day trial period the moment you create your account.

What am I doing here?

In this article I am going to set up a Pushover application and use it from my Drupal site to notify my phone of various events. I will give you two example use cases that Pushover can be handy with:

  • Whenever an anonymous user posts a comment that awaits administrative approval, I’ll send a notification to my phone
  • Whenever the admin user 1 logs into the site, I’ll send an emergency notification to my phone (useful if you are the only user of that admin account).

Naturally, these are examples and you may not find them useful. But they only serve as illustration of the power you can have by using Pushover.

Pushover account

First, a quick look at creating your Pushover account. To follow along, go on over to Pushover and sign up if you haven’t already and you can start your 5 day free trial. Then, go to Google Play or the App Store and install the app on your device. You’ll need to log in to your account and give the device a name. Mine is called simply Nexus (hint: I don’t have an iPhone).

Go back then to the Pushover website and you can already test it out by sending yourself a test notification to one or all of your active devices. You can even specify which sound it should make.

Next, if you want to use the API, you’ll need to create an application. This will generate an app_token for you to use later on. And that should be pretty much it.

Drupal

Now that the Pushover account creation is taken care of, the device app is installed (Android in my case) and I have my Pushover application, it’s time to see how I can achieve the goal set out in the beginning. As a brief roadmap, I will do the following:

  • Create a custom Drupal module
  • Add to it the Pushover class created by Chris Schalenborgh (a handy wrapper over the curl calls to the API)
  • Implement some hooks that will trigger the notifications based on certain conditions
  • Profit

The custom module I’ll be working with in this example is called pushover, and it contains a pushover.info and a pushover.module file (as required). Additionally, I will create a lib/Pushover folder in it to store the external class I’ll use to connect to Pushover. There are other, more recommended, ways of importing external libraries into Drupal modules (see the Libraries API), but for the sake of brevity this will work just fine.

The first thing I want to do in my pushover.module file is to import this external class. Based on my folder structure, I can do so with this line:

require_once(DRUPAL_ROOT . '/' . drupal_get_path('module', 'pushover') . '/lib/Pushover/Pushover.php');

Next, I want to create a reusable helper function that will return a pushable object. That is an object already populated with my own defaults (such as credentials) and that takes some parameters for the more common properties. But first, I want to put my Pushover account credentials into the settings.php file because they do not belong in my git repository:

$conf['pushover_credentials'] = array(
  'user_token' => 'uCpygdjfsndfi7233sdasdo33Yv',
  'app_token' => 'aKH8Nwsdasdanl342jmsdaBWgoVe',
);

Obviously, neither of these tokens is still valid, but if you are following along you should replace them with yours: the user_token is the one you get on the main page when you log in to the Pushover website, and the app_token is the one generated for your application.

Then I can continue with my helper function I mentioned earlier:

/**
 * Helper function that returns a pushable object using the Pushover class
 * 
 * @param $vars
 * @return bool|Pushover
 */
function pushover_get_pushable($vars) {
  global $conf;
  if (isset($conf['pushover_credentials'])) {
    $push = new Pushover();
    $push->setToken($conf['pushover_credentials']['app_token']);
    $push->setUser($conf['pushover_credentials']['user_token']);
    $push->setTitle($vars['title']);
    $push->setMessage($vars['message']);
    if (isset($vars['url'])) {
      $push->setUrl($vars['url']);
    }
    if (isset($vars['device'])) {
      $push->setDevice($vars['device']);
    }
    $push->setTimestamp(time());

    return $push;
  }
  else {
    return FALSE;
  }
}

In this function I instantiate a new Pushover object if there are credentials in my settings.php file. Otherwise, I fail silently by returning FALSE. The function takes some parameters that are set on the object: title and message are mandatory, whereas url and device are not. The device is the name of the device to which you want to restrict the notification.

Additionally, I set the current timestamp and then return the object.

Next, it’s time to use this function inside some hooks. The first one is going to be hook_comment_insert():

/**
 * Implements hook_comment_insert().
 */
function pushover_comment_insert($comment) {

  // Send a push notification if a new comment is created by an anonymous user
  // and it is not yet published.
  if ($comment->status == 0 && $comment->is_anonymous == TRUE) {
    global $base_url;
    $vars = array(
      'title' => 'New comment on ' . variable_get('site_name') . '!',
      'message' => 'Subject: ' . $comment->subject,
      'url' => $base_url . '/node/' . $comment->nid . '#comment-' . $comment->cid,
      'device' => 'Nexus'
    );
    $pushable = pushover_get_pushable($vars);
    if ($pushable) {
      $pushed = $pushable->send();
      if ($pushed == false) {
        watchdog('Pushover', t('A comment has been created but there was an error pushing that over.'), array(), WATCHDOG_ERROR, NULL);
      }
    }
  }
}

In here I check if the commenter is anonymous and the status is 0 (to be on the safe side). If that’s the case, I build the array of parameters for my helper function with some information about the site and comment and use the send() method to send the notification. You’ll notice that I restricted this to the Nexus device.

At the end, I check whether the notification went out successfully (the send() method returns FALSE if the Pushover service does not return the success status code of 1). If something went wrong, I quickly log it to the watchdog.

So now if an anonymous user writes a comment, I get a push notification with the site name, comment subject and URL. Nifty.

Now let’s turn to the second example, in which I implement an emergency notification for when my admin user logs into the site. If it’s not me, I’ll know something is wrong and I’ve probably been hacked. I do this inside a hook_user_login() implementation:

/**
 * Implements hook_user_login().
 */
function pushover_user_login(&$edit, $account) {
  // If the admin user logs in, send a push notification.
  if ($account->uid == 1) {
    $whitelist = array('1.1.1.1');
    if (!in_array(ip_address(), $whitelist)) {
      global $base_url;
      $vars = array(
        'title' => 'Admin user sign in',
        'message' => 'Admin user has logged into this site: ' . variable_get('site_name') . '!',
        'url' => $base_url,
      );
      $pushable = pushover_get_pushable($vars);
      if ($pushable) {
        $pushable->setPriority(2);
        $pushable->setRetry(30);
        $pushable->setExpire(60);
        $pushed = $pushable->send();
        if ($pushed == false) {
          watchdog('Pushover', t('An admin user has logged into the site but there was an error pushing this over.'), array(), WATCHDOG_ERROR, NULL);
        }
      }
    }
  }
} 

In here, I first check if the user logging in is the one with the ID of 1 (the main admin user). Then I create an array of whitelisted IPs to check the user’s address against. I don’t want to get notified if I log in from home or from the office (1.1.1.1 is just an example IP address).

Then, just like before, I get my pushable object with the usual variables (no device this time; I want this to go out to all my devices). On top of those, I set a priority of 2 (marking it as an emergency notification), a retry value of 30 seconds, and an expire value of 60 seconds. The latter two, in combination with the priority, make it so that if left unacknowledged on my phone, the notification gets resent every 30 seconds for a total duration of 60 seconds. For more information about the possible options you have with Pushover, make sure you check out their API docs.

And there it is. I will now get an emergency notification if someone logs in with my admin account. Of course, not very good if many people can log in with that account, but you get the point.

Conclusion

In this tutorial I showed you how you can use Pushover from your Drupal site to send notifications to your phone when certain events occur. I covered 2 examples but I’m sure you can find more. I would like to also mention that I found a contrib Drupal module called Pushover which uses Rules in order to send out Pushover notifications. I haven’t really used it, but make sure to check it out if you want to use Pushover and your site is already making use of the Rules module. Or, you know, you hate writing code.

Nov 05 2014
Nov 05

In September, Phase2 released a Grunt-based tool for building and testing Drupal sites. We have been working on the tool since January, and after adopting it as part of our standard approach for new Drupal sites, we wanted to contribute it back to the community. We are happy to invite you to get started with Grunt Drupal Tasks.

Grunt is a popular JavaScript-based task runner, meaning it’s a framework for automating tasks. It’s gained traction for automating common development tasks, like compiling CSS from Sass, minifying JavaScript, generating sprites, checking code standards, and more. There are thousands of plugins available that can be implemented out-of-the-box to do these common tasks or integrate with other supporting tools. (I mentioned that this is all free and open source software, right?)

Grunt Drupal Tasks is a Grunt plugin that defines processes that we have identified as best practices for building and testing Drupal sites.

Building Drupal

The cornerstone of Grunt Drupal Tasks is the “build” process, which assembles a runnable Drupal site docroot from a Drush make file and custom code and configuration.

The make file defines the version of Drupal core to use, the contrib modules, themes, and libraries to download, and even patches to apply to any of these components. The make file can include components released on Drupal.org or stored in public or private repositories. For patches, our best practice is to reference patches hosted on Drupal.org and associated with an issue. With these options, the entire set of components for a Drupal site can be declared in a make file and consistently retrieved using Drush.
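For illustration, a minimal sketch of such a make file might look like the following; the versions, the patch URL, and the library URL are placeholders:

core = 7.x
api = 2

; Core plus a contrib module in a subdirectory, with a patch tied to an issue.
projects[drupal][version] = 7.34
projects[views][version] = 3.10
projects[views][subdir] = contrib
projects[views][patch][] = https://www.drupal.org/files/issues/example-issue.patch

; An external library.
libraries[respond][download][type] = get
libraries[respond][download][url] = https://example.com/respond.zip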

After the Drush make process assembles all external dependencies for the project, the Grunt Drupal Tasks build process adds custom code and configuration. This includes custom installation profiles, modules, and themes, as well as “sites” directory files, like sites.php and settings.php for one or many subsites, and other “static” files to override, like .htaccess and robots.txt. These custom components are added to the built docroot by symlink, so it is not necessary to rebuild for every update to custom source code.

These steps result in a Drupal docroot assembled from custom source in the following structure:

src/
  modules/
    <custom modules>
  profiles/
    <custom installation profiles>
  sites/
    default/
      settings.php
    <optionally, other subsites or sites.php>
  static/
    <optionally, overrides for .htaccess or other files>
  themes/
    <custom themes>
  project.make

Grunt Drupal Tasks includes other optional build steps, which can be enabled as needed per project. One such step is the “compile theme” task, which compiles Sass files into CSS.

This build process gives us a reliable way for assembling Drupal core and contrib components, for adding our custom code, and integrating development tools like Sass. By using Grunt to automate this procedure, it becomes a portable script that can be shared among the project’s developers and used in deployment environments.

Testing Drupal

In order to help make best practices the default, Grunt Drupal Tasks includes support for a number of code quality and testing tools.

A “validate” task is provided that checks basic PHP syntax and Drupal coding standards using PHPLint and PHP Code Sniffer. We highly recommend that developers use this command while coding, and have included it as part of the default build process.

An “analyze” task is also provided, which adds support for the PHP Mess Detector. This task may be longer-running, so it is better suited to run as part of a continuous integration system, like Jenkins.

Finally, a “behat” task is provided for running test scenarios with Behat and the Drupal Extension. This encourages writing Behat tests for the project and committing them with the project code and build tools, so the tests can be run by other developers and in the integration environment by a continuous integration system.

Scaffolding for Drupal Projects

The old starting point for Drupal projects was a vanilla copy of Drupal core. Grunt Drupal Tasks offers scaffolding for Drupal projects that starts with Drush make, integrates custom code and overrides, and provides consistent support for a variety of developer tools.

This scaffolding is provided through the example included with Grunt Drupal Tasks, which is the recommended starting point for new projects. The scaffold structure adds a layer above the aforementioned “src” directory; this layer includes code and configuration related to Grunt Drupal Tasks (Gruntconfig.json and Gruntfile.js), dependencies for the supporting tools (composer.json), and other resources for the tools (features/, behat.yml, and phpmd.xml).

The example includes the following:

features/
src/
.gitignore
Gruntconfig.json
Gruntfile.js
behat.yml
composer.json
package.json
phpmd.xml

For full documentation on starting a new project with Grunt Drupal Tasks, see CONFIG.md.

Learning More

Watch the Phase2 blog for more information about Grunt Drupal Tasks. If you are attending the Bay Area Drupal Camp this week, please check out my session on Using Grunt to Manage Drupal Build and Testing Tools.

Sep 09 2012
Sep 09

Unbeknown to many users, installation profiles are what Drupal uses to install a site. The two profiles that ship with core are Standard and Minimal. Standard gives new users a basic, functional Drupal site. Minimal provides a very minimal configuration so developers and site builders can start building a new site. A key piece of a Drupal distro is an installation profile.

I believe that developers and more experienced site builders should be using installation profiles as part of their client site builds. In Drupal 7 an installation profile is treated like a special module, so it can implement hooks, including hook_update_N(). This means that the installation profile is the best place for controlling when modules are turned on and off, switching themes, or making any other site-wide configuration change that can't be handled by Features or a module-specific update hook.
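As a hedged example, reusing the PROFILE_NAMESPACE placeholder from the templates below, an update hook in the profile could enable a module and switch the default theme on every copy of the site:

/**
 * Enable Views and switch the default theme site-wide.
 */
function PROFILE_NAMESPACE_update_7100() {
  // 'mytheme' is a placeholder theme name.
  module_enable(array('views'));
  theme_enable(array('mytheme'));
  variable_set('theme_default', 'mytheme');
}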

In an ideal world you could have one installation profile that is used for all of your projects, which you just include in your base build. Unfortunately, installation profiles tend to evolve into being very project-specific. At the same time, you are likely to want a common starting point. I like to give my installation profiles unique names: rather than something generic like "my_profile", I prefer to use "[client_prefix]_profile". I'll cover project prefixes in another blog post.

After some trial and error, I've settled on a solution which I think works: a common starting point for an installation profile that will diverge over time under a unique namespace. My solution relies on some basic templates and a bash script with a bit of sed. I could have written all of this in PHP and even made a drush plugin for it, but I prefer to do this kind of thing on the command line with bash. I'm happy to work with someone to port it to a drush plugin if you're interested.

Here is a simple example of the templates you could use for creating your installation profile. The version on github is closer to what I actually use for clients, along with the build script.

base.info

name = PROFILE_NAME
description = PROFILE_DESCRIPTION
core = 7.x
dependencies[] = block
dependencies[] = dblog

base.install

<?php
/**
 * @file
 * Install, update and uninstall functions for the the PROFILE_NAME install profile.
 */

/**
 * Implements hook_install().
 *
 * Performs actions to set up the site for this profile.
 *
 * @see system_install()
 */
function PROFILE_NAMESPACE_install() {
  // Enable some standard blocks.
  $default_theme = variable_get('theme_default', 'bartik');
  $values = array(
    array(
      'module' => 'system',
      'delta' => 'main',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'content',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'user',
      'delta' => 'login',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'navigation',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'management',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 1,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'help',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'help',
      'pages' => '',
      'cache' => -1,
    ),
  );
  $query = db_insert('block')->fields(array('module', 'delta', 'theme', 'status', 'weight', 'region', 'pages', 'cache'));
  foreach ($values as $record) {
    $query->values($record);
  }
  $query->execute();

  // Allow visitor account creation, but with administrative approval.
  variable_set('user_register', USER_REGISTER_VISITORS_ADMINISTRATIVE_APPROVAL);

  // Enable default permissions for system roles.
  user_role_grant_permissions(DRUPAL_ANONYMOUS_RID, array('access content'));
  user_role_grant_permissions(DRUPAL_AUTHENTICATED_RID, array('access content'));
}

// Add hook_update_N() implementations below here as needed.

base.profile

<?php
/**
 * @file
 * Enables modules and site configuration for a PROFILE_NAME site installation.
 */

/**
 * Implements hook_form_FORM_ID_alter() for install_configure_form().
 *
 * Allows the profile to alter the site configuration form.
 */
function PROFILE_NAMESPACE_form_install_configure_form_alter(&$form, $form_state) {
  // Pre-populate the site name with the server name.
  $form['site_information']['site_name']['#default_value'] = $_SERVER['SERVER_NAME'];
}

Some developers might recognise the code above: it is from the Minimal installation profile.

The installation profile builder script is a simple bash script that relies on sed.

build-profile.sh

#!/bin/bash
#
# Installation profile builder
# Created by Dave Hall http://davehall.com.au
#

FILES="base.info base.install base.profile"
OK_NS_CHARS="a-z0-9_"
SCRIPT_NAME=$(basename $0)

namespace="my_profile"
name=""
description="My automatically generated installation profile."
target=""

usage() {
  echo "usage: $SCRIPT_NAME -t target_path -s profile_namespace [-d 'project_descrption'] [-n 'human_readable_profile_name']"
}

while getopts  "d:n:s:t:h" arg; do
  case $arg in
    d)
      description="$OPTARG"
      ;;
    n)
      name="$OPTARG"
      ;;
    s)
      namespace="$OPTARG"
      ;;
    t)
      target="$OPTARG"
      ;;
    h)
      usage
      exit
      ;;
  esac
done

if [ -z "$target" ]; then
  echo ERROR: You must specify a target path. >&2
  exit 1;
fi

if [ ! -d "$target" -o ! -w "$target" ]; then
  echo ERROR: The target path must be a writable directory that already exists. >&2
  exit 1;
fi

ns_test=${namespace/[^$OK_NS_CHARS]//}
if [ "$ns_test" != "$namespace" ]; then
  echo "ERROR: The namespace can only contain lowercase alphanumeric characters and underscores ($OK_NS_CHARS)" >&2
  exit 1
fi

if [ -z "$name" ]; then
  name="$namespace";
fi

for file in $FILES; do
  echo Processing $file
  sed -e "s/PROFILE_NAMESPACE/$namespace/g" -e "s/PROFILE_NAME/$name/g" -e "s/PROFILE_DESCRIPTION/$description/g" $file > $target/$file
done

echo Completed generating files for $name installation profile in $target.


Place all of the above files into a directory. Before you can generate your first profile you must run "chmod +x build-profile.sh" to make the script executable.

You need to create the output directory, for testing we will use ~/test-profile, so run "mkdir ~/test-profile" to create the path. To build your profile run "./build-profile.sh -s test -t ~/test-profile". Once the script has run you should have a test installation profile in ~/test-profile.

I will continue to maintain this as a project on GitHub.

Jun 09 2011
Jun 09

One of my development goals is to learn how to set up continuous integration so that I’ll always remember to run my automated tests. I picked up the inspiration to use Hudson from Stuart Robertson, with whom I had the pleasure of working on a Drupal project before he moved to BMO. He had set up continuous integration testing with Hudson and Selenium on another project he’d worked on, and they completed user acceptance testing without any defects. That’s pretty cool. =)

I’m a big fan of automated testing because I hate doing repetitive work. Automated tests also let me turn software development into a game, with clearly defined goalposts and a way to keep score. Automated tests can be a handy way of creating lots of data so that I can manually test a site set up the way I want it to be. I like doing test-driven development: write the test first, then write the code that passes it.

Testing was even better with Rails. I love the Cucumber testing framework because I could define high-level tests in English. The Drupal equivalent (Drucumber?) isn’t quite there yet. I could actually use Cucumber to test my Drupal site, but it would only be able to test the web interface, not the code, and I like to write unit tests in addition to integration tests. Still, some automated testing is better than no testing, and I’m comfortable creating Simpletest classes.

Jenkins (previously known as Hudson) is a continuous integration server that can build and test your application whenever you change the code. I set it up on my local development image by following Jenkins’ installation instructions. I enabled the Git plugin (Manage Jenkins – Manage Plugins – Available).

Then I set up a project with my local git repository. I started with a placeholder build step of Execute shell and pwd, just to see where I was. When I built the project, Hudson checked out my source code and ran the command. I then went into the Hudson workspace directory, configured my Drupal settings.php to use the database and URL I created for the integration site, and configured permissions and Apache with a name-based virtual host so that I could run web tests.
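The vhost was just a standard name-based virtual host pointed at the Hudson workspace. Here is a minimal sketch; the hostname, job name, and Hudson home directory are assumptions, so adjust them to your setup:

# Sketch: name-based vhost serving the Hudson job workspace.
# "integration.example.local" and the workspace path are examples.
sudo tee /etc/apache2/sites-available/integration <<'EOF'
<VirtualHost *:80>
  ServerName integration.example.local
  DocumentRoot /var/lib/hudson/jobs/myproject/workspace
</VirtualHost>
EOF
sudo a2ensite integration
sudo apache2ctl graceful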

For build steps, I used Execute shell with the following settings:

mysql -u integration integration < sites/default/files/backup_migrate/scheduled/site-backup.mysql
/var/drush/drush test PopulateTestUsersTest
/var/drush/drush test PopulateTestSessionsTest
/var/drush/drush testre MyProjectName --error-on-fail

This loads the backup file created by Backup and Migrate, sets up my test content, and then uses my custom testre command.
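If you would rather have the build abort on the first failing step, the same commands can run as a single Execute shell script with set -e; this is only a sketch reusing the paths and test names above:

#!/bin/bash
# Stop the build as soon as any command exits non-zero.
set -e
# Restore a known-good database from the Backup and Migrate snapshot.
mysql -u integration integration < sites/default/files/backup_migrate/scheduled/site-backup.mysql
# Recreate the fixture users and sessions used by the web tests.
/var/drush/drush test PopulateTestUsersTest
/var/drush/drush test PopulateTestSessionsTest
# Run every matching test class; --error-on-fail makes failures fatal.
/var/drush/drush testre MyProjectName --error-on-fail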

Code below (c) 2011 Sacha Chua ([email protected]), available under GNU General Public License v2.0 (yes, I should submit this as a patch, but there’s a bit of paperwork for direct contributions, and it’s easier to just get my manager’s OK to blog about something…)

// A Drush command callback.
function drush_simpletest_test_regular_expression($test_re='') {
  global $verbose, $color;
  $verbose = is_null(drush_get_option('detail')) ? FALSE : TRUE;
  $color = is_null(drush_get_option('color')) ? FALSE : TRUE;
  $error_on_fail = is_null(drush_get_option('error-on-fail')) ? FALSE : TRUE;
  if (!empty($test_re) && !preg_match("/^\/.*\//", $test_re)) {
    $test_re = "/$test_re/";
  }
  // call this method rather than simpletest_test_get_all() in order to bypass internal cache
  $all_test_classes = simpletest_test_get_all_classes();

  // Check that the test class parameter has been set.
  if (empty($test_re)) {
    drush_print("\nAvailable test groups & classes");
    drush_print("-------------------------------");
    $current_group = '';
    foreach ($all_test_classes as $class => $details) {
      if (class_exists($class) && method_exists($class, 'getInfo')) {
        $info = call_user_func(array($class, 'getInfo'));
        if ($info['group'] != $current_group) {
          $current_group = $info['group'];
          drush_print('[' . $current_group . ']');
        }
        drush_print("\t" . $class . ' - ' . $info['name']);
      }
    }
    return;
  }

  // Find test classes whose names match the regular expression.
  $matching_classes = array();
  foreach ($all_test_classes as $class => $details) {
    if (class_exists($class) && method_exists($class, 'getInfo')) {
      if (preg_match($test_re, $class)) {
        $info = call_user_func(array($class, 'getInfo'));
        $matching_classes[$class] = $info;
      }
    }
  }

  // Sort matching classes by weight
  uasort($matching_classes, '_simpletest_drush_compare_weight');

  $results = array();
  foreach ($matching_classes as $class => $info) {
    $main_verbose = $verbose;
    $results[$class] = drush_simpletest_run_single_test($class, $error_on_fail);
    $verbose = $main_verbose;
  }

  $failures = $successes = 0;
  foreach ($results as $class => $status) {
    print $status . "\t" . $class . "\n";
    if ($status == 'fail') {
      $failures++;
    } else {
      $successes++;
    }
  }
  print "Failed: " . $failures . "/" . ($failures + $successes) . "\n";
  print "Succeeded: " . $successes . "/" . ($failures + $successes) . "\n";
  if ($failures > 0) {
    return 1;
  }
}

I didn’t bother hacking Simpletest output to match the Ant/JUnit output so that Jenkins could understand it better. I just wanted a pass/fail status, as I could always look at the results to find out which test failed.

What does it gain me over running the tests from the command-line? I like having the build history and being able to remember the last successful build.

I’m going to keep this as a local build server instead of setting up a remote continuous integration server on our public machine, because it involves installing quite a number of additional packages. Maybe the other developers might be inspired to set up something similar, though!

2011-06-09 Thu 09:51

Nov 23 2010
Nov 23

This post is a follow-up to a presentation I gave at the Pacific Northwest Drupal Summit in Vancouver held back in October.

Background

We've discussed using the Simpletest module in a previous post, Drupal Simpletest Module Abridged. Simpletest is a powerful way to know that the code powering your Drupal site is operating correctly and that new functionality is not breaking what has already been implemented.

Running tests in the browser is time consuming. In this post we'll look at ways to automate the testing process. By the time you're finished reading, you'll know how to receive notifications with test results for your custom code each time you make a commit to your remote version control repository. And you'll also know how to configure daily test runs that cover all available tests including core and contrib.

Along with explaining how to test projects where the entire codebase is stored in version control, I will also describe testing a distribution. The examples given can be used for both Drupal 6.x and Drupal 7.x.

Assumptions

  • Your code is stored in a Git repository
  • Your Git repository supports POST callbacks (more on that to come)
  • You are using the latest version of drush, and for distributions drush_make
  • You will install a testing environment that is a functional copy of your Drupal project. For the following examples this site is http://test.example.com

Introducing automated testing

To get a good understanding of the workflow we will be creating, here is an overview of a common development workflow when running tests in the browser.

  1. Code new functionality
  2. Write a test for that functionality
  3. Run the test in the browser
  4. Wait for test results
  5. Possibly make fixes and return to item 3
  6. Eventually run all the available tests
  7. Wait a long time

In an ideal world you would run all available tests after you are satisfied with your new code and test. But then you'd end up staring at the browser for an hour or so.

Here is the development workflow we will create using automated testing.

  1. Code new functionality
  2. Write a test for that functionality
  3. Run the test using drush
  4. Wait for test results
  5. Possibly make fixes and return to item 3
  6. Commit code into version control and move to next task
  7. Receive a notification with the results of all your custom tests

Let's take a look at how we'll use drush to run tests from the command line.

Installing a drush test runner

UPDATE: Nov. 29, 2010: Drush HEAD now contains an updated version of the test.drush.inc file mentioned below. The drush command is now "test run" not "test". If you are using Drush HEAD >= Nov 29, 2010 or a Drush version > 3.3 please review the updated usage.

A test runner is a command used by a human or a continuous integration server to run tests. We're going to install a drush command that can be used both for test automation and for running tests on the command line. Drush commands live in your ~/.drush directory. If you do not already have one, create it and download the following drush command:

$ git clone git://github.com/sprice/drush_test.git ~/.drush/drush_test

This version of test.drush.inc is based on the Drupal 6.x command created by Young Hahn. It has been updated to work with both Drupal 6.x and Drupal 7.x.

Running tests with Drush

To get a list of all the tests available, cd to your webroot and type:

$ drush test --uri=http://test.example.com

The command prints the available test groups and the classes they contain.

Using drush you can run a single test, a class of tests, or all tests.

$ drush test --uri=http://test.example.com BlockAdminThemeTestCase runs a single test.

$ drush test --uri=http://test.example.com Block runs all six of the block tests.

$ drush test --uri=http://test.example.com all runs every available test. This includes all the core and contrib tests and all the custom tests you have written for your project code.

When creating custom tests, it is a good idea to use the same test class for each one. That way drush can run all your custom tests with a single command.
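For example, if all of your custom classes share a common name prefix, such as the CustomTestCase class used later in this post, one invocation covers them all:

$ drush test --uri=http://test.example.com CustomTestCase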

Note that there currently seems to be an issue testing the included D7.x Standard profile with this drush test runner. There is no problem testing the D7.x Minimal profile or the custom profile I include later in this post.

Choosing a continuous integration server

A continuous integration server is the primary tool that makes test automation possible. There are many options available, though the choice I have seen most often used with Drupal projects is Hudson.

For this post we will be using the small and easy to configure CI Joe. I have enjoyed using CI Joe on projects because it is simple to install and configure and because it does the things that I want and nothing more.

Installing CI Joe

OSX 10.6

It can be helpful to have a basic test setup on your local development environment. However, this won't be an automated system.

$ gem install cijoe

$ echo "export PATH=/Users/username/.gem/ruby/1.8/bin:\$PATH" >> ~/.profile

$ source .profile

Linux

The following has been tested on Ubuntu 10.10. Similar instructions will probably work for other distributions though package versions may vary.

$ sudo apt-get install ruby

$ sudo apt-get install rubygems

$ sudo gem install cijoe

Add /var/lib/gems/1.8/bin to your PATH.

$ nano ~/.bashrc

PATH=$PATH:/var/lib/gems/1.8/bin

$ source ~/.bashrc

Installing a test environment

It's important to note that you want to have a Drupal site running that is for testing only. Never test your production site (Simpletest should not be installed on production). And it's a good idea not to test your development or staging sites either. Automated testing will slow the sites down and the purpose of introducing the practices is to increase efficiency.

As outlined in the assumptions of this post, I expect you are familiar with installing and configuring a Drupal site. To begin, create a new duplicate of your project site with its own codebase and its own database.

The following examples will assume you will install your site to /var/www/test.example.com and that the site is available at http://test.example.com.

Configuring CI Joe

We need to add some configuration options to CI Joe. All configuration is done via Git config options. We're going to define a test runner command, select a branch of our version control repository to test, and protect the CI server with HTTP Basic Auth. We're also going to configure CI Joe to queue multiple test runs. For more information about CI Joe configuration see the documentation.

For this example I'm going to clone my project code and configure CI Joe to run tests within the CustomTestCase.

$ cd /var/www

$ git clone git://github.com/user/example.git test.example.com

$ cd test.example.com

$ git config --add cijoe.runner "drush test --uri=http://test.example.com/ CustomTestCase"

$ git config --add cijoe.branch develop

$ git config --add cijoe.user username

$ git config --add cijoe.pass secret

$ git config --add cijoe.buildallfile tmp/cijoe.txt

Start CI Joe on port 4567

$ cijoe -p 4567 .

Visit CI Joe at http://test.example.com:4567 using your username and password for access.

Click "Build". You'll need to refresh your browser to see when CI Joe has finished building.

When you click "Build", CI Joe checks out the latest code from the develop branch and executes the test runner command. CI Joe is agnostic when it comes to which runner to use. As long as it returns a non-zero exit status when it fails and a zero exit status when it passes, it just works. This also means that the run-tests.sh PHP script included in Drupal 7 core won't work, as it doesn't exit in a way that CI Joe and some other continuous integration servers expect.

Getting notified

Two git hooks handle notifications. Depending on the test outcome, .git/hooks/build-failed or .git/hooks/build-worked will be executed after a build run. These are shell scripts and can do whatever you want them to. For our example we will simply have them send an email. Make sure they are executable.

$ nano /var/www/test.example.com/.git/hooks/build-failed

$ chmod +x /var/www/test.example.com/.git/hooks/build-failed

$ nano /var/www/test.example.com/.git/hooks/build-worked

$ chmod +x /var/www/test.example.com/.git/hooks/build-worked

The examples included in the CI Joe project include scripts that use the mailutils package.
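As a minimal sketch, build-failed could send a short message with the mail command from mailutils; the recipient address here is a placeholder:

#!/bin/sh
# .git/hooks/build-failed -- CI Joe runs this when a build fails.
# Requires the mailutils package; replace the recipient address.
echo "CI Joe build failed on $(hostname) at $(date)" |
  mail -s "Tests FAILED: test.example.com" devteam@example.com

build-worked can be identical apart from the subject line and message body.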

Now when a build runs you'll receive an email letting you know whether your tests passed or failed. Go ahead and click "Build" again to see.

Automation

We now have the testing system configured so that clicking the "Build" button is all that is needed in order to be notified of test results. The last step is to automate the build process so that every time commits are pushed to the remote repository, a build will be triggered.

Most hosted Git repositories support POST URLs. GitHub calls them Post-Receive URLs; Unfuddle calls them Repository Callbacks. When configured, any time the repository receives a push, it will POST information about the commit to the URL provided.

Luckily for us, CI Joe doesn't care how the POST is formatted. If CI Joe receives any POST at all, a build will be triggered. That means that this will work:

$ curl -d "This is a POST" http://username:secret@test.example.com:4567

Enter the URL of your test server as described in the above command, including the username, password, and port, into your Git repository's POST URL option. In GitHub that is found in Admin -> Service Hooks -> Post-Receive URLs.

That's it! You've now configured your project to include automated continuous integration. Push some commits to your remote repository, wait for an email and then pat yourself on the back.

Testing a Drupal distribution

I'll include a note about testing distributions since that's how many are building their Drupal projects these days. A distribution contains a manifest of the code used in the project, an installation profile and likely custom code as well. When you make a change to your project you're sometimes only updating a module version in the .make file. So if you click "Build" in CI Joe it will check out the new .make file but the related code in your project will still be the same.

To solve this problem we will rebuild the codebase on each test run. This may sound crazy to some, but it's important to remember that it's best to commit locally as often as you can and only push to your remote repository a few times a day. If you really want to speed things up you can always use Squid as a caching server for drush module downloads.

Here is a demonstration Drupal 7 distribution so you can get a good understanding of how it works.

Copy the simple_distro.make file to your system and use drush_make to install your project codebase.

$ drush make --working-copy simple_distro.make /var/www/test.example.com

Now one more Git hook, .git/hooks/after-reset, needs to be configured to trigger the project rebuild before the tests run.

$ nano .git/hooks/after-reset

$ chmod +x .git/hooks/after-reset

This script will cd to your profile directory and run the rebuild.sh script to trigger a rebuild. I've added the -y flag as I'm passing that to my rebuild.sh script.

Configure your git hooks, noting that your project repository is in /var/www/test.example.com/profiles/profilename.
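A minimal sketch of the hook, assuming your profile directory name and a rebuild.sh that accepts -y, looks like this:

#!/bin/sh
# .git/hooks/after-reset -- CI Joe runs this after checking out fresh
# code, before the test runner. Rebuild the codebase from the .make file.
cd /var/www/test.example.com/profiles/profilename || exit 1
./rebuild.sh -y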

Complete daily test builds

Earlier I discussed that running all available tests is time consuming. Given that Drupal core and many contrib modules have good test coverage we're not going to worry about running those tests on every remote commit. We'll use cron to run all available tests once a day. Install a second testing platform for this purpose.

Configure the second test environment at /var/www/daily.test.example.com. Make changes to your notification scripts so that it's clear that these are daily test runs. Finally, change the test runner git config option to be:

$ git config --add cijoe.runner "drush test --uri=http://daily.test.example.com/ all"

You can run many CI Joe apps on a single machine as long as they are on separate ports. Start CI Joe as described above using port 4568.
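For example, assuming the daily site's docroot from above:

$ cd /var/www/daily.test.example.com

$ cijoe -p 4568 .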

Now use cron to run the tests at 1:00am.

Set up the cron job:

$ sudo crontab -e

0 1 * * * curl -d "POST" http://username:secret@daily.test.example.com:4568

Each morning you'll know whether your entire test suite is passing or not. Get ready to start sleeping easier each night.

Conclusion

This post has described how to configure your development environment to include automated continuous integration. By writing tests for the important functionality of your project you'll ensure the integrity of your code over time. By automating the testing process you reduce the effort of running tests to zero and ensure your team is notified if there is a problem. Happy testing.

Apr 02 2009
Apr 02

The definition of a workflow, according to Wikipedia, is a "depiction of a sequence of operations". When taken at face value, a workflow is typically something you want to automate in Drupal. In other words, what we're talking about is Drupal Automation and my guess is, you'll want to automate things in Drupal based on certain events.

The confusing part of Drupal automation is that you need to know which pieces work together to accomplish it. With the Workflow, Actions*, Triggers* (* part of Drupal 6 core), and Rules modules all in play, it can be quite confusing to know where to start.

Put simply, in Drupal 6, you can still use the Workflow module (which I don't show in the video) but you can also get by with the default Triggers and Actions (as mentioned, already installed with Drupal). However, unless you know you need to install Workflow module to achieve the automation you're seeking, and potentially install Triggerunlock module (which you don't really need if you install Workflow in Drupal 6), things can get really confusing, really fast.

So, aside from trying to confuse you with the above paragraph (I did that on purpose), about all the setup and things you do or don't need, I focused a bit more on the Rules module because it's a one-stop-shop for automation which goes beyond what you can do with Triggers, Actions and Workflow.

Even though I'm a convert to Drupal's Rules module, my compliments go out to John VanDyk (whose book helped me learn Drupal even better) for creating Workflow and Actions (which I started with in Drupal 4.7) and to Wolfgang Ziegler for creating the ever powerful Rules module. I can only hope this video will provide you with most everything you need to know to automate things so you can achieve your ultimate Drupal workflow!

Note: If you're still working with Drupal 5, then Workflow, Actions, and Workflow-Ng (also by Wolfgang) will give you the automation you need. In most all cases, you'll definitely want Token module installed so you can do the cool stuff the big boys do.

Mar 20 2009
Mar 20

After the announcement of our testing service I asked for feedback from prominent community members. The feedback was overwhelmingly positive with some notable quotes being:

  • "Usability testing of Drupal 7 would have been virtually impossible if it weren't for the automated testing in place keeping core stable."
  • "Drupal 7 HEAD is much more stable than any release we [have] ever had."
  • "It easily saved me 100 hours knowing I didn't break something I wrote an hour ago. I'm not sure I would have been able to have completed it [DBNTG]."
  • "Testing.drupal.org went live in October 2008, and once again, Drupal's development process was revolutionized. Now, developers don't need to sit through a test run (which could take 30 minutes or more) in order to verify their changes are working; they can simply upload their changes and be informed by one of the testing clients."

In order to allow Drupal shops to gain the same advantages that Drupal core has received, we offer our testing service. Through the service we will maintain an automated testing network similar to testing.drupal.org for use by our clients. In addition we can review your tests to ensure that they are up to standards and provide feedback on ways to improve them.

We are currently working to provide testing of "data sets" which is extremely useful for testing changes against deployed sites. The data from a deployed site can be backed up and tests run against that dataset to ensure that code changes do not break custom workflows or configurations. Testing of data sets does not replace functional testing like that used in Drupal 7 core, but it provides an additional layer of confidence when deploying changes to a live site.

Feb 28 2009
Feb 28

Boombatower Development is adding automated testing as a new service. Boombatower Test Services (BTS) will provide a manageable solution that requires minimal setup.

One of the most exciting features included in Drupal 7 is the testing framework and the large number of tests that come with it. The testing framework has introduced a new development paradigm into core development that has opened up a number of exciting possibilities in addition to maintaining a very stable development version of Drupal core. As respected core developer Károly "chx" Négyesi said:

"Drupal 7 HEAD is much more stable than any release we [have] ever had."

The increased stability has allowed for the Drupal 7 development time to be extended and the related code freeze to be shortened. Drupal shops can benefit from the same principle. By having extensive tests in place companies can maintain a higher level of confidence in their products and extend that confidence to their clients.

BTS will use the second generation framework soon to be in place on drupal.org and testing.drupal.org. The framework has proven to be robust and the latest version will provide a number of powerful new features.

In addition to the functional tests you write for your custom modules, like those included in Drupal 7 core, Boombatower will provide a solution that allows for testing with a copy of live data to ensure that code changes do not cause issues with a fully configured site setup. Having both types of tests in place, automatically run on all patches, commits, or nightly builds, will give Drupal shops more confidence when deploying new features or code changes to a live site.

In addition to providing automated testing we also offer consulting services for evaluating tests. Properly written tests will ensure that the results received from automated testing are as useful as possible.

I will be speaking at Drupalcon DC 2009 about the history of the automated testing framework, its present form and the exciting new future. I will stick around afterwards to discuss the session and the launch of our automated testing service. If you are interested please stop by after the session.

Jan 30 2009
Jan 30

When developing Drupal modules or checking out modules I don't know yet, I run an instance of Drupal on my local machine. Sooner or later I feel the need to erase this installation and do a fresh install. The Drupal installation process is pretty straightforward and simple, making it a good candidate for automating at least parts of it.

In the Drubuntu Drupal group I found this install script for Drupal that does not only install Drupal but also creates an operating system user, as well as apache2 and DNS configurations. A pretty useful script, but it does too much for my development needs.

[Embedded screencast]

I took it as a basis to create a simpler version that merely installs Drupal. In the screencast I demonstrate using this bash script that you can download here (REMEMBER TO ALWAYS DO BACKUPS!!!) from Drupal's CVS repository.

To use it you may need to modify the server root path or set a password for the MySQL root user. Obviously you need a Bash interpreter and wget for downloading. Have fun installing Drupal even more quickly!
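As a rough sketch of the kind of settings such a script exposes (the variable names and defaults here are illustrative, not the script's actual contents):

#!/bin/bash
# Illustrative settings for a quick local Drupal install.
SERVER_ROOT="/var/www"   # web server root path to install into
MYSQL_ROOT_PASS=""       # MySQL root password, if you have set one
DB_NAME="drupal"
VERSION="6.10"

cd "$SERVER_ROOT" || exit 1
wget "http://ftp.drupal.org/files/projects/drupal-$VERSION.tar.gz"
tar -xzf "drupal-$VERSION.tar.gz"
mysql -u root ${MYSQL_ROOT_PASS:+-p"$MYSQL_ROOT_PASS"} \
  -e "CREATE DATABASE IF NOT EXISTS $DB_NAME"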
