Jul 01 2015

Earlier this year, we launched a new site for the ACLU. The project required a migration from Drupal 6, building a library of interchangeable page components, complex responsive theming, and serious attention to accessibility, security and privacy. In this post, I’ll highlight some of the security and privacy-related features we implemented.

Privacy

As an organization, the ACLU has a strong commitment to protecting individual privacy, and we needed to translate their passion for this issue to their own website. This meant meeting their high standards at a technical level.

The ACLU works to ensure that technical innovation doesn’t compromise individual privacy, and most of our work around user privacy involved ensuring that third-party services like Facebook and YouTube weren’t accessing visitors’ data without their consent.

Social sharing

Facebook “Like” buttons on the site offer a quick way for visitors to follow ACLU content on their Facebook feed. But even if you don’t click to “Like” a page, the tracking scripts behind them – coming straight from Facebook – log your behavior and make note of your visit there. That data can be used for targeted advertising, and the data itself is also a sellable product. Because these buttons are all over the web, Facebook can piece together quite a bit of your browsing history, without your knowledge.

To prevent this from happening to ACLU site visitors, we hooked up a jQuery plugin called Social Share Privacy to make sure that visitors who want to “Like” ACLU can do so while others can avoid Facebook’s data tracking. The plugin initially loads a disabled (greyed-out) button for the Facebook “Like”. If the visitor wants to use the button, they click once to enable it, which then loads Facebook’s iframe, and then a second time to “Like” the page.

ACLU.org Facebook button

The Social Share Privacy jQuery plugin keeps Facebook from tracking you without your consent.

A nice side effect is a faster page load for everyone since we don’t have to load content from Facebook on the initial page render.

Video playback

YouTube and other video sharing sites present a similar privacy problem – the scripts used to embed videos can also dig into visitor data without their consent.

MyTube video embed

A MyTube video embed on ACLU.org.

MyTube is a script that was written by the Electronic Frontier Foundation as a Drupal 5 module, then updated as a Drupal 6 module by students in the Ohio State University Open Source Club. For the ACLU site, we worked on porting the module to Drupal 7 and updating various parts of the code to work with updated APIs (both on the Drupal end and the video host end).

Similar to the Social Share Privacy plugin, MyTube initially loads a static thumbnail image from the embedded video, and to play the video, the visitor gives it an extra click, opting-in to run code from the third-party site.

If you’re interested in helping complete the Drupal 7 version of the module, take a look at some of the patches our team submitted, and test them out!

Security

Due to the ACLU’s high profile and involvement with controversial topics, security was a major concern throughout the development process. While we can’t go into too much detail here (because, well, security), here are a few steps we took to keep the site safe.

  1. Scrubbed database backups. When developers copy down database instances for local development, sensitive information – such as user emails and passwords – is removed. This is made pretty easy using Drush sql-sync-pipe.
  2. Secure hosting on Pantheon with controlled development environments. In addition, though our development team can make code changes, only a handful of authorized ACLU web managers can actually deploy changes to their live site.
  3. The site runs entirely over SSL, enforced via HSTS headers.
  4. We only load assets (CSS/JS/images/fonts/embeds/iframes/etc.) from approved sources, via Content Security Policy headers, which reduces XSS risks in modern browsers. (See the sketch after this list.)
  5. The previous version of the site had various “action” tools where visitors could log in to sign petitions and send letters to their elected representatives. During the redesign this functionality was moved to a separate site, on a separate codebase, running on separate infrastructure. Any site that allows anonymous visitors to create accounts is, by definition, more susceptible to attack. By splitting into two sites, a successful exploit of one web property will not affect the other.
  6. All custom code was reviewed by multiple members of the Advomatic team, and Tag1 Consulting performed an additional security review, in which our codebase received excellent marks and a glowing recommendation.
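
For items 3 and 4, these headers are often set at the web-server level, but they can also be emitted from a small custom Drupal 7 module. Here’s a minimal, hypothetical sketch – the module name, max-age and whitelisted host are illustrative placeholders, not the ACLU’s actual policy:

<?php
/**
 * Implements hook_init().
 */
function mysite_security_init() {
  // Ask browsers to use HTTPS for the next year, including subdomains.
  drupal_add_http_header('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  // Only allow assets from our own origin and explicitly approved hosts.
  drupal_add_http_header('Content-Security-Policy', "default-src 'self'; script-src 'self' https://apps.example.com");
}
?>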

When your organization champions a cause, all your activities should embody it, including your website. The ACLU takes these standards very seriously, and so do we. Take a look at some of our other projects to see how we worked with nonprofits on sites that reflect their values. 


Jul 01 2015

Yesterday something significant occurred: http://brandywine.psu.edu launched on the new Polaris 2 Drupal platform. And soon the Abington Campus web site will move to the same platform. And perhaps many more.

Ten years ago this would not have been possible. Not because of the technology but because of the landscape and attitude here at Penn State. Words like 'portal' and 'content management system' were perceived as negatives, as things to avoid, as poorly implemented technologies.

That has changed.

One could argue that moving the Penn State home page and new site to Drupal was the significant event, but I was not convinced. That change could have been an anomaly, a lack of other, better options, or just pure luck. It's not that a number of people in the Penn State Drupal community didn't put a great deal of time and effort into presenting Drupal as a viable option, but once that argument was presented and accepted, the process to actually create the site was... let's say byzantine. So in my mind moving www.psu.edu to Drupal, while radical and important, did not 'count'.

So yesterday's launch of the Brandywine web site confirms not only the success of Drupal at Penn State, but also a change in mindset and attitudes at a much higher and broader level at the University. Additional possibilities for the use of the Polaris 2 platform may be in the works; hopefully we will learn more about those soon.

Perhaps there will also be a Polaris 3....

Jul 01 2015

One of the options in Nittany Vagrant is to build a local, development version of an existing Drupal site - copying the files and database, then downloading them to the Vagrant VM. It is pretty straightforward, but there is the occasional trouble spot.

Here is a short video of how to do it.

Jul 01 2015

When continuing development of a web site, big changes occur every so often. One such change that may occur, frequently as a result of another change, is a bulk update of URLs. When this is necessary, you can greatly improve the response time experienced by your users—as they are redirected from the old path to the new path—by using a handy directive offered in Apache's mod_rewrite called RewriteMap.

At Agaric we regularly turn to Drupal for its power and flexibility, so one might question why we didn't leverage Drupal's support for handling redirects. When we see an opportunity for our software to respond "as early as it can", it is worth investigating how that can be done. Apache handles redirects itself, making it entirely unnecessary to hand off to PHP, never mind bootstrapping Drupal and retrieving a redirect record from a database, just to tell a browser (or a search engine) to look somewhere else.

Two conditions made RewriteMap a great candidate here. For one, there will be no changes to the list of redirects once they are set: these are for historical purposes only (the old URLs are no longer exposed anywhere else on the site). Also, thanks to the substitution capability afforded by RewriteMap, we could handle the full set of hundreds of redirects via a single RewriteRule, making for a fitting and concise solution.

So, what did we do, and how did we do it?

We started with an existing set of URLs that followed the pattern: http://example.com/user-detail/ID[/tab-name]. Subsequently we implemented a module on the site that produced aliases for our user page URLs. The new pattern for the page was then (given exceptions for multiple J Smiths, etc. via the suffix): http://example.com/user-detail/firstname-lastname[-suffix#][/tab-name]. The mapping of ID to firstname-lastname[-suffix#] was readily available within Drupal, so we used an update hook to write out the existing mappings to a file (in the Drupal public files folder, since we know that's writable by Drupal). This file (which I called 'staffmapping.txt') is what we used for a simple text-based rewrite map. Sample output of the update hook looked like this:

# User ID to Name mapping:
1 admin-admin
2 john-smith
3 john-smith-2
4 jane-smith

The format of this file is pretty straightforward: comments can be started on any line with a #, and the mapping lines themselves are composed of {lookupValue}{whitespace}{replacementValue}.
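
For reference, here is a minimal sketch of what such an update hook might look like. The module name, query and path prefix are illustrative only (the real hook read the site's own alias data):

<?php
/**
 * Write the user-ID-to-alias mapping out to a RewriteMap text file.
 */
function mymodule_update_7001() {
  $lines = array('# User ID to Name mapping:');
  // Hypothetical lookup: collect each old numeric path and its new alias.
  $result = db_query("SELECT source, alias FROM {url_alias} WHERE source LIKE 'user-detail/%'");
  foreach ($result as $row) {
    // source is e.g. "user-detail/2"; alias is e.g. "user-detail/john-smith".
    $id = substr($row->source, strlen('user-detail/'));
    $name = substr($row->alias, strlen('user-detail/'));
    $lines[] = $id . ' ' . $name;
  }
  file_put_contents('public://staffmapping.txt', implode("\n", $lines) . "\n");
}
?>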

To actually consume this mapping somewhere in our rules, we must let Apache know about the mapping file itself. This is done with a RewriteMap directive, which can be placed in the server config or inside a VirtualHost directive. The format of the directive looks like this: RewriteMap MapName MapType:MapSource. In our case, the file is a simple text file mapping, so the MapType is 'txt'. The resulting string added to our VirtualHost section is then: RewriteMap staffremap txt:/path/to/staffmapping.txt. This directive makes the rewrite mapping file available under the name "staffremap" in our RewriteRules. There are other MapTypes, including ones that use random selection of replacement values from a text file, use a hash map rather than a text file, use an internal function, or even use an external program or script to generate replacement values.

Now it's time to actually change incoming URLs using this mapping file, providing the 301 redirect we need. The rewrite rule we used looks like this:

RewriteRule ^user-detail/([0-9]+)(.*) /user-detail/${staffremap:$1}$2 [R=301,L]

The initial argument to the rewrite rule identifies which incoming URLs this rule applies to. This is the string: "^user-detail/([0-9]+)(.*)". This particular rule looks for URLs starting with (signified by the special character ^) the string "user-detail/", then followed by one or more numbers: ([0-9]+), and finally, anything else that might appear at the end of the string: "(.*)". There's a particular feature of regex being used here as well: each of the search terms in parentheses is captured (or tagged) by the regex processor, which then provides references that can be used in the replacement string portion. These are available as $<captured position> — so the first value captured by parentheses is available in "$1" (this would be the user ID), and the second in "$2" (which for this expression would be anything else appearing after the user ID).

Following the whitespace is our new target URL expression: "/user-detail/${staffremap:$1}$2". We're keeping the beginning of the URL the same, and then, following the expression syntax "${rewritemap:lookupvalue}" — which in our case is "${staffremap:$1}" — we find the new user-name URL. This section could be read as: take the value from the rewrite map called "staffremap", where the lookup value is $1 (the first tagged expression in the search: the numeric value), and return the substitution value from that map in place of this expression. So, if we were attempting to visit the old URL /user-detail/1/about, the staffremap provides the value "admin-admin" from our table. The final portion of the replacement URL (which is just $2) copies everything else that was passed on the URL through to the redirected URL. So, for example, /user-detail/1/about includes the /about portion of the URL in the ultimate redirect URL: /user-detail/admin-admin/about.

The final section of the sample RewriteRule applies additional flags. In this case, we are specifying the response status of 301, and the L flag indicates to mod_rewrite that this is the last rule it should process.
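
Putting the two directives together in context, the relevant portion of a VirtualHost might look something like the following sketch (domain and file paths are hypothetical; note that in VirtualHost context the matched path includes the leading slash):

<VirtualHost *:80>
    ServerName example.com
    RewriteEngine On
    # Make the text mapping available under the name "staffremap".
    RewriteMap staffremap txt:/path/to/staffmapping.txt
    # Redirect old numeric user URLs to their named equivalents.
    RewriteRule ^/user-detail/([0-9]+)(.*) /user-detail/${staffremap:$1}$2 [R=301,L]
</VirtualHost>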
That's basically it! We've gone from an old URL pattern to a new one with a redirect mapping file and only two directives. For an added performance perk, especially if your list of lookup and replacement values is rather lengthy, you can easily replace your text table file (type txt) with a hash map (type dbm) that Apache's mod_rewrite also understands, using a quick command and directive adjustment. Following our example, we'll first run:

$> httxt2dbm -i staffremap.txt -o staffremap.map

Now that we have a hash map file, we can adjust our RewriteMap directive accordingly, changing the type to dbm, and of course updating the file name, which becomes:

RewriteMap staffremap dbm:/path/to/staffremap.map

RewriteMap substitutions provide a straightforward, high-performance method for pretty extensive enhancement of RewriteRules. If you are not familiar with RewriteRules generally, at some point you should consider reviewing the Apache documentation on mod_rewrite — it's worthwhile knowledge to have.

Jul 01 2015

A little over a year ago we launched the Acquia Certification Program for Drupal. We ended up the first year with close to 1,000 exams taken, which exceeded our goal of 300-600. Today, I'm pleased to announce that the Acquia Certification Program passed another major milestone with over 1,000 exams passed (not just taken).

People (myself included) have debated the pros and cons of software certifications for years, so I want to give an update on our certification program and some of the lessons learned.

Acquia's certification program has been a big success. A lot of Drupal users require Acquia Certification, from the Australian government to Johnson & Johnson. We also see many of our agency partners use the program as a tool in the hiring process. While a certification exam cannot guarantee someone will be great at their job (e.g. we only test for technical expertise, not for attitude), it does give a frame of reference to work from. The feedback we have heard time and again is that the Acquia Certification Program is tough, but fair; validating skills and knowledge that are important to both customers and partners.

Our credentials were also identified in the Certification Magazine Salary Survey as among the most desired to obtain. For a first-year program to be identified among certification leaders like Cisco and Red Hat speaks volumes about the respect our program has established.

Creating a global certification program is resource intensive. We've learned that it requires the commitment of a team of Drupal experts to work on each and every exam. We now have four different exams: developer, front-end specialist, back-end specialist and site builder. It roughly takes 40 work days for the initial development of one exam, and about 12 to 18 work days for each exam update. We update all four of our exams several times per year. In addition to creating and maintaining the exams, there are also the day-to-day operations of running the program, which include providing support to participants and ensuring the exams are in place for testing around the globe, both online and at test centers. However, we believe that effort is worth it, given the overall positive effect on our community.

We also learned that benefits are important to participants and that we need to raise the profile of those who achieve these credentials, especially holders of the new Acquia Certified Grand Master credential (those who have passed all three developer exams). We have a special Grand Master Registry and look to create a platform for these Grand Masters to help share their expertise and thoughts. We do believe that if you have a Grand Master working on a project, you have a tremendous asset working in your favor.

At DrupalCon LA, the Acquia Certification Program offered a test center at the event, and we ended up with 12 new Grand Masters by the end of the conference. We saw several companies stepping up to challenge their best people to achieve Grand Master status. We plan to offer testing at DrupalCon Barcelona as well, so take advantage of the convenience of the on-site test center and the opportunity to talk with Peter Manijak (who developed and leads our certification efforts), myself, and an Acquia Certified Grand Master or two about Acquia Certification and how it can help you in your career!

Jul 01 2015

We can easily check out code from our git repositories for our local, development, and staging servers. We can get a database from the live site through Backup and Migrate, drush, or a number of other ways. But getting the files of the site – the images, PDFs, and everything else in /sites/default/files – is not at the top of the list for most developers. In recent versions of Backup and Migrate, you can export the files, but oftentimes this can be a huge archive file. There is an easier way.

The Stage File Proxy module saves the day by sending requests for files to the live server if they do not exist yet in your local environment. This saves you space on your non-production environment since it only grabs files from the pages you visit. Great for those of us who have dozens of sites on our local machines.

As simple as can be, it gets the files you need on your local server, as you need them.  No more navigating broken looking dev sites.  This will get your environment looking as it should so you can concentrate on your task at hand.

Installing and Configuring Stage File Proxy


# Download Stage File Proxy
drush dl -y stage_file_proxy
# Enable Stage File Proxy
drush en -y stage_file_proxy
# Set the origin (where the files live, i.e. the production site)
drush variable-set stage_file_proxy_origin "http://www.yoursitename.com"

You can also set it in your settings.php file, helpful for those of us who use different settings files per environment.


$conf['stage_file_proxy_origin'] = 'http://www.yoursitename.com'; // no trailing slash

Stage File Proxy has great documentation, and if Drush or settings files aren't your thing, the configuration can also be set in the admin UI at admin/config/system/stage_file_proxy, or Admin Menu > Configuration > System > Stage File Proxy.

Stage File Proxy Settings

This module is for use on your local, development, and staging servers, and should be used by anyone who works in multiple environments. It should be disabled on your production/live site.

Jul 01 2015

After I posted a case study last week, a number of readers asked me if they could try a demo and see how it works. There is no try-out demo yet, but in the meantime I produced a video that demonstrates the basic controls:

[embedded content]

If you have any questions about integration and the open source library that powers it, feel free to contact me or comment!

Jul 01 2015

I did another video the other day. This time I've got a D7 and a D8 install open side by side, comparing the process of adding an article.

Jul 01 2015

Preface

This blog post is for developers, not site builders, as the analysis for cache debugging requires knowledge about the runtime stack of Drupal.

The Problem with Caching in Drupal 7

To achieve performance, Drupal 7 relies heavily on caching: process something once and cache the end result so that the same work doesn’t need to be repeated. Conditions also have to be defined for when that cache expires or is invalidated. Drupal has a caching layer to help with this. When you want to store something in cache, you use cache_set; to retrieve it, you use cache_get; and to wipe a cache bin clean, you can use cache_clear_all.
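
As a quick illustration, the typical Drupal 7 pattern looks like this (mymodule_compute_expensive_data() is a hypothetical stand-in for the expensive work):

<?php
function mymodule_expensive_data() {
  $cid = 'mymodule:expensive_data';
  // Return the cached copy if we have one.
  if ($cache = cache_get($cid)) {
    return $cache->data;
  }
  // Otherwise do the expensive work and cache the result in the default
  // bin until the next general cache wipe.
  $data = mymodule_compute_expensive_data();
  cache_set($cid, $data, 'cache', CACHE_TEMPORARY);
  return $data;
}
?>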

Often, modules can implicitly clear or set cache unintentionally. This can lead to more caching overhead than you need: for example, theme registry clearing, use of the variable_set function, or calls to other modules that call cache_clear_all. The problem is, how do you track down the culprits to fix the issue?

Enter Cache Debug

Cache Debug is a module that wraps around the caching layer and adds logging, including stacktrace information. It means that when cache_set or cache_clear_all is called, you can trace back to what called it - understand the problem and fix it. Very quickly.

It comes with three logging options:

  • watchdog - good if you’re using syslog module but deadly if you’re using dblog.
  • error_log - logs to your php error log
  • arbitrary file - specify your own log file to log to

Configuring Cache Debug

Because the caching system is so highly utilized, cache logging can be incredibly verbose. Perhaps this is why there is no logging around this in Drupal core. Fortunately, Cache Debug is highly configurable to control what to log.

NOTE: Because the caching system is loaded and used before Drupal’s variable system which manages configuration, it is best to set configuration in settings.php rather than in the database. However, there is a web UI that does set configuration in the database for ease of use.

Basic configuration

If you’ve used the memcache module before, this should feel familiar. In order to use Cache Debug, you need to set it as the cache handler:

<?php
$conf['cache_backends'][] = 'sites/all/modules/cache_debug/cache_debug.inc';
$conf['cache_default_class'] = 'DrupalDebugCache';
?>

This tells Drupal that there is a cache backend located in the path provided (make sure it’s correct for your Drupal site!) and that the default class for all cache bins is the DrupalDebugCache class. If you only want to monitor a single bin, you may want to omit this option.

Since Cache Debug is a logger and not an actual caching system, it needs to pass cache requests on to a real cache system. By default, Cache Debug will use Drupal core’s database cache system for cache storage, but if you’re using memcache, redis or similar, you may want to set that as the handler for Cache Debug:

<?php
$conf['cache_debug_class'] = 'MemCacheDrupal';
$conf['cache_cache_debug_form'] = 'DrupalDatabaseCache';
?>

You need to also configure those modules accordingly.

At this point, you’ll be logging all cache calls and stack traces to set and clear calls to the php error log.

Configure the logging location

You may want to choose your own logging location. For example, if you use dblog, then you won’t want to log to watchdog because it will bloat your database. Likewise, if you don’t want to bloat your php error log, you may want to log to an arbitrary file. You can choose your logging location by setting cache_debug_logging_destination to error_log (the default), watchdog or file. For file, you will also need to provide the location:

<?php
$conf['cache_debug_logging_destination'] = 'file';
$conf['cache_debug_log_filepath'] = '/tmp/cachedebug.log';
?>

Configuring logging options

You can choose to log calls to cache get, getMulti, set and clear. You can also choose to log a stacktrace of these calls to show the stack that triggered the call. This is most useful for calls to SET and CLEAR. For a minimal logging option with the most amount of insight, you might want to try this:

<?php
$conf['cache_debug_log_get'] = FALSE;
$conf['cache_debug_log_getMulti'] = FALSE;
$conf['cache_debug_log_set'] = TRUE;
$conf['cache_debug_log_clear'] = TRUE;
$conf['cache_debug_stacktrace_set'] = TRUE;
$conf['cache_debug_stacktrace_clear'] = TRUE;
?>

Logging per cache bin

You don’t have to log the entire caching layer if you know which bin to look at for the caching issue you’re observing. For example, if you’re looking for misuse of variable_set, you only need to log the cache_bootstrap bin. In which case you could do this:

<?php
# Do not log to all cache bins so ensure this line is removed (from above):
# $conf['cache_default_class'] = 'DrupalDebugCache';
$conf['cache_bootstrap_class'] = 'DrupalDebugCache';
?>

Configure for common issues

Variable set calls and theme registry rebuilds are the two most common issues and so Cache Debug has use cases for these issues built in. So long as Cache Debug is the cache handler for the bin, you can turn off logging and turn on these features and Cache Debug will only log when these issues occur:

<?php
$conf['cache_default_class'] = 'DrupalDebugCache';
$conf['cache_debug_common_settings'] = array(
  'variables' => TRUE,
  'theme_registry' => TRUE,
);
// Turn off logging
$conf['cache_debug_log_get'] = FALSE;
$conf['cache_debug_log_getMulti'] = FALSE;
$conf['cache_debug_log_set'] = FALSE;
$conf['cache_debug_log_clear'] = FALSE;
?>

Analysing the logged data

Cache Debug logs to a log file like the example below:

Example output of Cache Debug logging

In this snapshot of log output you can see both how cache debug logs cache calls and the stacktracing in action.

Log format structure

A log line starts with a value that describes the cache bin, the cache command and the cache ID. E.g. cache_bootstrap->set->variables would be a cache_set call to the cache_bootstrap cache bin to set the variables cache key. Some calls also log additional data; for example, cache clear also indicates if the call was a wildcard clear. Set calls also log how much data (length) was set.

Stack trace logs

When stack tracing is enabled for specific commands, a stack trace will be logged immediately after the log event that triggered it. The trace rolls back through each function that led to the current cache command being triggered. In the example above you can see that cache_clear_all was called by drupal_theme_rebuild, which was called by an include from phptemplate_init. If you look at the source code in phptemplate_init, you’ll see that this means a cache rebuild was triggered from including template.php. In this case, the Zen base theme had the theme registry rebuild setting left on.

Jun 30 2015

By Steve Burge 30 June 2015

An OSTraining member took our recommendation and installed the Administration Menu module, which makes it much easier to navigate through your Drupal site.

However, after enabling Administration Menu, they found that they lost their Shortcuts menu. This is normally the grey bar under Drupal's default admin toolbar.

Here's how to use Administration Menu and Shortcuts together.

First, make sure that the "Administration menu" module is enabled, but also make sure that the "Administration menu Toolbar style" module is enabled:

Using Administration Menu and Shortcuts in Drupal
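
If you prefer the command line, you can enable both with Drush (assuming the projects' standard machine names):

# Enable the Administration menu module and its Toolbar style submodule
drush en -y admin_menu admin_menu_toolbar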

Now go to Configuration > Administration Menu and you'll be able to check the "Shortcuts" box, as in the image below. This will re-enable the Shortcuts menu.

If you don't see the Shortcuts menu immediately, try clicking the down arrow in the top-right corner:

Jun 30 2015

A robust Continuous Integration system with good test coverage is the best way to ensure that your project remains maintainable; it is also a great opportunity to enhance your development workflow with Composer. Composer is a dependency management system that collects and organizes all of the software that your project needs in order to run. 

Using Composer to manage the modules and themes used in your Drupal site offers many benefits. Your project Git repository remains very light, containing only the files customized for your project, and your autoload file is managed for you, making it easy to start using code written with generic php classes, either custom to your project, or provided via Packagist. In order to take advantage of these capabilities of Composer, though, it is necessary to add a build step to your development workflow. Composer will download all of the components needed for your project and generate your autoload file when you run composer install or composer update. However, it requires a bit of planning to figure out how this step should integrate into your existing processes. Which components should be committed to Git, and which should be ignored? How does this fit into the Dev / Test / Live workflow?

The Travis Continuous Integration service offers a framework wherein these questions can be resolved. Travis is just one of many services that provide the capability to automatically rebuild and test your software every time a change is committed to your repository. Travis is popular because it is easy to configure via a .travis.yml file committed to the root of your repository, and it is free to test public GitHub repositories. Getting started with Composer, Behat and Drupal using Travis CI is now easier than ever—just enter the following composer command:

composer create-project pantheon-systems/example-drupal7-composer my-project

This will clone the example-drupal7-composer project from our GitHub repository, set up your root project files, and commit it all to a local Git repository. From there, all you need to do is customize these files to suit your Drupal project.

Existing Drupal sites can be converted with the help of the Drush composer-generate command, which can quickly populate the requires section of your composer.json file. The rest of the steps are clearly documented in the pantheon-systems/travis-scripts project.
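
For orientation, a stripped-down .travis.yml for a Composer-built Drupal project might look something like the sketch below. This is a hedged outline only; the example-drupal7-composer project ships its own, more complete configuration:

language: php
php:
  - 5.5
install:
  # Rebuild the full code base; only customized files live in Git.
  - composer install --no-interaction --prefer-dist
script:
  # Run the Behat suite after every successful build.
  - ./vendor/bin/behat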

Once you follow these instructions, you’ll have a Drupal project that:

  • Does not store any core or contrib project files in your repository

  • Builds via composer install in Travis CI on every commit

  • Tests via Behat after every successful build

  • Pushes to a prepared Pantheon environment after every successful test

The last step is completely optional; we’ll be covering this in a future blog post. The rest of the steps will work fine, even if you haven’t set up a free Pantheon for Agencies account. If you are on Pantheon, though, your site will be ready to be deployed to the Test and Live environments with the click of a button, as soon as it is ready.

If you are not very familiar with the concept of using Composer to manage your Drupal projects, see the Composer Tools and Frameworks presentation I did with Doug Dobrzynski at DrupalCon LA.

Jun 30 2015


In our continuing mission (well, not a mission; it’s actually a blog series) to help you improve your Drupal website, let’s look at the power of caching.

In our previous post, we debunked some all-too-common Drupal performance advice. This time we're going positive, with a simple, rock-solid strategy to get you started: caching is the single best way to improve Drupal performance without having to fiddle with code.

At a basic level, it is easy enough for a non-technical user to implement. Advanced caching techniques might require some coding experience, but for most users, basic caching alone will bring about drastic performance improvements.

Caching in Drupal happens at three separate levels: application, component, and page. Let’s review each level in detail.

Application-level caching

This is the caching capability baked right into Drupal. You won't see it in action unless you dig deep into Drupal's internal code. It is enabled by default and won't ever serve stale, outdated pages.

With application-level caching, Drupal essentially stores cached pages separately from the site content (which goes into the database). You can't really configure this, except for telling Drupal where to save cached pages explicitly. You might see improved performance if you use Memcached on cached pages, but the effect is not big enough to warrant the effort.

Drupal stores many of its internal data and structures in efficient ways to speed up frequent access when application-level caching is enabled. This isn’t information that a site visitor will see per se, but it is critical for constructing any page. Basically, the only enhancement that can be made at this level is improving where this cached information is stored, like using Memcached instead of the database.

You just need to install Drupal and let the software take care of caching at the application level.

Component-level caching

This works on user-facing components such as blocks, panels, and views. For example, you might have a website with constantly changing content but a single block remains the same. In fact, you may have the same block spread across dozens of pages. Caching it can result in big performance improvements.

Component-level caching is usually disabled by default, though you can turn it on with some simple configuration changes. For the best results, identify blocks, panels, and views that remain the same across your site, and then cache them aggressively. You will see strong speedups for authenticated users.
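
For custom blocks, Drupal 7 lets a module declare how its block may be cached. Here is a minimal sketch (the module and block names are hypothetical):

<?php
/**
 * Implements hook_block_info().
 */
function mymodule_block_info() {
  $blocks['promo'] = array(
    'info' => t('Site-wide promo block'),
    // This block is identical for every page and user, so cache it once
    // for the whole site rather than per page or per role.
    'cache' => DRUPAL_CACHE_GLOBAL,
  );
  return $blocks;
}
?>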

Page-level caching

This is exactly what it sounds like: the entire page is cached, stored and delivered to a user. This is the most efficient type of caching. Instead of generating pages dynamically with a full Drupal bootstrap, your server can serve static HTML pages to users instead. Site performance will improve dramatically.

Page-level caching gives you a lot of room to customize. You can use any number of caching servers, including Varnish, which we use at Acquia Cloud. You can also use CDNs like Akamai, Fastly, or CloudFlare to deliver cached pages from servers close to the user's location. With CDNs, you are literally bringing your site closer to your users.

Keep in mind that forced, page-level caching works only for anonymous users by default. Fortunately, this forms the bulk of traffic to any website.
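
In Drupal 7, these page cache settings can also be forced from settings.php rather than the Performance admin screen. A minimal sketch (the max-age value is just an example):

<?php
$conf['cache'] = 1;                     // Cache pages for anonymous users.
$conf['page_cache_maximum_age'] = 900;  // Let proxies/CDNs keep pages for 15 minutes.
?>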

It bears repeating: Caching should be your top priority for boosting Drupal performance. By identifying and caching commonly repeated components and using a CDN at page-level, you’ll see site speed improvements that you can write home about.

Next time: How to Evaluate Drupal Modules for Performance Optimization.

Jun 30 2015

wildebeests migrating

Hi there. I’m Adam from Acquia. And I want YOU to adopt Drupal 8!

I’ve been working on this for months. Last year, as an Acquia intern, I wrote the Drupal Module Upgrader to help people upgrade their code from Drupal 7 (D7) to Drupal 8 (D8). And now, again as an Acquia intern, I’m working to provide Drupal core with a robust migration path for your content and configuration from D6 and D7 to Drupal 8. I’m a full-service intern!

The good news is that Drupal core already includes the migration path from D6 to D8. The bad news is that the (arguably more important) migration path from D7 to D8 is quite incomplete, and Drupal 8 inches closer with each passing day. That’s why I want -- nay, need -- your help.

We need to get this upgrade path done.

If you want core commits with your name on them (and why wouldn’t you?), this is a great way to get some, regardless of your experience level. Noob, greybeard, or somewhere in between, there is a way for you to help. (Besides, the greybeards are busy fixing critical issues.)

What’s this about?

Have you ever tried to make major changes to a Drupal site using update.php and a few update_N hooks? If you haven’t, consider yourself lucky; it’s a rapid descent into hell. Update hooks are hard to test, and any number of things can go wrong while running them. They’re not adaptable or flexible. There’s no configurability -- you just run update.php and hope for the best. And if you’ve got an enormous site with hundreds of thousands of nodes or users, you’ll be staring anxiously at that progress bar all night. So if the idea of upgrading an entire Drupal site in a single function terrifies you, congratulations: you’re sane.

No, when it comes to upgrading a full Drupal site, hook_update_N() is the wrong tool for the job. It’s only meant for making relatively minor modifications to the database. Greater complexity demands something a lot more powerful.

The Migrate API is that something. This well-known contrib module has everything you need to perform complex migrations. It can migrate content from virtually anything (WordPress, XML, CSV, or even a Drupal site) into Drupal. It’s flexible. It’s extensible. And it’s in Drupal 8 core. Okay, not quite -- the API layer has been ported into core, but the UI and extras provided by the Drupal 7 Migrate module are in a (currently sandboxed) contrib module called Migrate Plus.

Also in core is a new module called Migrate Drupal, which uses the Migrate API to provide upgrade paths from Drupal 6 and 7. This is the module that new Drupal 8 users will use to move their old content and configuration into Drupal 8.

At the time of this writing, Migrate Drupal contains a migration path for Drupal 6 to Drupal 8, and it’s robust and solid thanks to the hard work of many contributors. It was built before the Drupal 7 migration path because Drupal 6 security support will be dropped not long after Drupal 8 is released. It covers just about all bases -- it migrates your content into Drupal 8, along with your CCK fields (and their values). It also migrates your site’s configuration into Drupal 8, right down to configuration variables, field widget and formatter settings, and many other useful tidbits that together comprise a complete Drupal 6 site.

Here’s a (rather old) demo video by @benjy, one of the main developers of the Drupal 6 migration path:

[embedded content]

Awesome, yes? I think so. Which brings me to what Migrate Drupal doesn’t yet have -- a complete upgrade path from Drupal 7 to Drupal 8. We’re absolutely going to need one. It’s critical if we’re going to get people onto Drupal 8!

This is where you come in. The Drupal 7 migration path is one of the best places to contribute to Drupal core, even at this late stage of the game. The D7 upgrade path has been mapped out in a meta-issue on drupal.org, and a large chunk of it is appropriate for novice contributors!

Working on the Migrate API involves writing migrations, which are YAML files (if you’re not familiar with YAML, the smart money says that you will pick it up in, honestly, thirty seconds flat). You’ll also write automated tests, and maybe a plugin or two -- a crucial skill when it comes to programming Drupal 8! If you’re a developer, contributing migrations is a gentle, very useful way to prepare for D8.

A very, very quick overview of how this works

Migrations are a lot simpler than they look. A migration is a piece of configuration, like a View or a site slogan. It lives in a YAML file.

Migrations have three parts: the source plugin, the processing pipeline, and the destination plugin. The source plugin is responsible for reading rows from some source, like a Drupal 7 database or a CSV file. The processing pipeline defines how each field in each row will be massaged, tweaked, and transformed into a value that is appropriate for the destination. Then the destination plugin takes the processed row and saves it somewhere -- for example, as a node or a user.

There’s more to it, of course, but that’s the gist. All migrations follow this source-process-destination flow.


id: d6_url_alias
label: Drupal 6 URL aliases
migration_tags:
  - Drupal 6

# The source plugin is an object which will read the Drupal 6
# database directly and return an iterator over the rows of the
# {url_alias} table.
source:
  plugin: d6_url_alias

# Define how each field in the source row is mapped into the destination.
# Each field can go through a “pipeline”, which is just a chain of plugins
# that transform the original value into the destination value, one step at
# a time. Source values can go through any number of transformations
# before being added to the destination row. In this case, there are no
# transformations -- it's just direct mapping.
process:
  source: src
  alias: dst
  langcode: language

# The destination row will be saved by the url_alias destination plugin, which
# knows how to create URL aliases. There are many other destination plugins,
# including ones to create content entities (nodes, users, terms, etc.) and
# configuration (fields, display settings, etc.)
destination:
  plugin: url_alias

# Migrations can depend on specific modules, configuration entities, or even
# other migrations. 
dependencies:
  module:
    - migrate_drupal

I <3 this, how can I help?

The first thing to look at is the Drupal 7 meta-issue. It divvies up the Drupal 7 upgrade path by module, and divides them further by priority. The low-priority ones are reasonably easy, so if you’re new, you should grab one of those and start hacking on it. (Hint: migrating variables to configuration is the easiest kind of migration to write, and there are plenty of examples.) The core Migrate API is well-documented too.
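
To give a flavor of how small these can be, here is a hedged sketch of a variable-to-configuration migration. The module, variable names and config object are made up for illustration:

id: d7_mymodule_settings
label: Drupal 7 mymodule settings
migration_tags:
  - Drupal 7

# Read a handful of Drupal 7 variables as the source row.
source:
  plugin: variable
  variables:
    - mymodule_threshold
    - mymodule_enabled

# Map each variable straight onto a key in the destination config.
process:
  threshold: mymodule_threshold
  enabled: mymodule_enabled

# Save the processed row as a Drupal 8 configuration object.
destination:
  plugin: config
  config_name: mymodule.settings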

If you need help, we’ve got a dedicated IRC channel (#drupal-migrate). I’m phenaproxima, and I’m one of several nice people who will be happy to help you with any questions you’ve got.

If you’re not a developer, you can still contribute. Do you have a Drupal 6 site? Migrate it to Drupal 8, and see what happens! Then tell us how it went, and include any unexpected weirdness so we can bust bugs. As the Drupal 7 upgrade path shapes up, you can do the same thing on your Drupal 7 site.

If you want to learn about meatier, more complicated issues, the core Migrate team meets every week in a Google Hangout-on-air, to talk about larger problems and overarching goals. But if you’d rather focus on simpler things, don’t worry about it. :)

And with that, my fellow Drupalist(a)s, I invite you to step up to the plate. Drupal 8 is an amazing release, and everyone deserves it. Let’s make its adoption widespread. Upgrading has always been one of the major barriers to adopting a new version of Drupal, but the door is open for us to fix that for good. I know you can help.

Besides, core commits look really good with your name tattooed on ‘em. Join us!

Jun 30 2015

The introduction of Behat 3, and the subsequent release of the Behat Drupal Extension 3, opened up several new features with regards to testing Drupal sites. The concept of test suites, combined with the fact that all contexts are now treated equally, means that a site can have different suites of tests that focus on specific areas of need.

Background

Behat is a PHP framework for implementing Behavior Driven Development (BDD). The aim is to use ubiquitous language to describe value for everybody involved, from the stakeholders to the developers. A quick example:

In order to encourage visitors to become more engaged in the forums
Visitors who choose to post a topic or comment
Will earn a 'Communicator' badge

This is a Behat feature; there need be no magic or structure to this. The goal is to simply and concisely describe a feature of the site that provides true value. In Behat, features are backed up with scenarios. Scenarios are written in Gherkin and are mapped directly to step definitions, which execute against a site and determine if, indeed, a given scenario is working.

Continuing with the above example:

Scenario: A user posts a comment to an existing topic and earns the communicator badge
  Given a user is viewing a forum topic "Getting started with Behat"
  When they post a comment
  Then they should immediately see the "Communicator" badge

Each of the Given, When, and Then steps are mapped to code using either regex, or newly in Behat 3, Turnip syntax:

/**
 * Create a forum topic.
 *
 * @Given a user is viewing a forum topic ":topic"
 */
 public function assertUserViewingForumTopic($topic) {
   // Create and log in user.
   $user = (object) ['name' => $this->getRandom()->name()];
   $this->userCreate($user);
   // Create a forum topic titled $topic.
   // ...
 }

While that may look like a lot of custom code, extensions – such as the Mink Extension and the Drupal Extension – provide many pre-built step-definitions and methods for interacting with websites and the Drupal backend. Much has been written about using these extensions, so let’s now move on to some exciting new features available in Behat 3.

Test Suites

Continuing with our example above, one can imagine there are many ways to assert that behavior. In the brief PHP code example, a user is programmatically created. However, the same scenario could also test that a 'register' link exists on the forum comment, and manually create a user through the UI. Of course, such a test would be rather slow (although still valuable to prevent front-end regressions). Behat 3 provides a way to use the same scenario with different backend contexts for evaluating whether the scenario is currently functional. These are called test suites.

Test suites can have different contexts loaded and be tied to different tags:

default:
  suites:
    frontend:
      contexts:
        - FrontEndContext
      filters:
        tags: "frontend"
    backend:
      contexts:
        - ApiContext
      filters:
        tags: "backend"

This defines two suites, 'frontend', and 'backend'. The filter specifies that any scenario tagged with @frontend will be executed by the frontend suite and similarly @backend will be run by the backend suite. Our example scenario above could be tagged with both:

@frontend @backend
Scenario: A user posts a comment to an existing topic and earns the communicator badge
...

When the front-end suite is run, the FrontEndContext class will be used and that could contain all the logic necessary to assert that the frontend is behaving as this scenario expects. When the backend suite is run, the ApiContext will be used and that could test the low-level badge functionality needed to support the frontend. Since these tests will be faster, they could be run much more frequently should test execution time start to bottleneck the project.

No More Singular Context

Unlike Behat 2, there is no more singular context. One or more contexts can be specified for each suite. The only constraint is that no two contexts being used at the same time may provide the same step definition. Thus, in our example, the FrontEndContext and the ApiContext are mutually exclusive, since they both provide the same step-definitions.

The Drupal Extension provides a nice example of splitting out step-definitions into multiple contexts that can be used together as needed:

default:
  suites:
    default:
      contexts:
        - FeatureContext
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MessageContext
        - Drupal\DrupalExtension\Context\MinkContext
        - Drupal\DrupalExtension\Context\MarkupContext

If, for instance, a particular suite doesn't touch the API, then the DrupalContext could be removed from the above example.

This allows for much more specific contexts that can be used as needed, rather than overwhelming test writers with every possible available step-definition.

Removing Barriers to Testing

If all of this seems like a lot of effort to expend on any given project, remove the work needed to get started!

By using a starter kit that provides a Behat framework, and even some simple starting tests, new tests can quickly be added even if the project isn't strictly following BDD principles. Ideally, during the course of the project, new, more specific tests are added and the default generic tests are edited or removed completely.

Image: ©http://www.istockphoto.com/profile/shork

Jun 30 2015

Solr is great! When you have a site – even one without much content – and you want full text search, using Solr as the search engine will greatly improve the speed of the search itself and the accuracy of the results. But, as so often happens, the good things come with a drawback too. In this case, we’re talking about a new system with which our web application has to communicate. This means that, even if the system is pretty good by default, in some cases you have to understand more deeply how it works. Besides being able to configure the system, you have to know how to debug it. In the following, we’ll see how we can debug the Solr queries our applications use for searching, but first let’s think of a concrete example of when we need to debug a query.

An example use case

Let’s suppose we have 2 items which both contain a specific word in the title (let’s say ‘building’). And we have a list where we show search results ordered first by their score and then, when the scores are equal, by creation date, descending. At first sight you would say that, because both of them have the word in the title, they have the same score, so you should see the newest item first. Well, it could be that this is not true: even though both have the word in the title, the scores are not the same.

Preliminaries

Let’s suppose we have a system which uses Solr as a search server. In order to debug a query, we first have to be able to run it directly against Solr. The easiest case is when Solr is accessible via http from your browser. If not, Solr must be reachable from the server where your application sits, so you can call it from there. I won’t dwell on this; if you managed to get Solr running for your application, you should be able to call it.

Getting your results

The next thing to do is to make a query with the exact same parameters as your application. To have a concrete example, we will assume a Drupal site which uses the Search API module with Apache Solr as the search server. One way to get the exact query being made is to check the SearchApiSolrConnection::makeHttpRequest() method, which makes a call to drupal_http_request() using a URL. You could also use the Solr logs to check the query, if that is easier. Let’s say we search for the word “building”. An example query should look like this:

http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value%...

If you take that one and run it in the browser, you should see a JSON output with the results, something like:

solr json output

To make it look nicer, you can just remove the “wt=json” (and optionally “json.nl=map”) from your URL, so it becomes something like:

http://localhost:8983/solr/select?fl=item_id%2Cscore&qf=tm_body%24value^5.0&qf=tm_title^13.0&fq=index_id%3A"articles"&fq=hash%3Ao47rod&start=0&rows=10&sort=score desc%2C ds_created desc&q="building"

which should result in a much nicer, xml output:

solr xml output

List some additional fields

So now we have the results from Solr, but all they contain is the internal item ID and the score. Let’s add some fields which will help us see exactly what text the items contain. The fields you are probably most interested in are the ones in the “qf” variable of your URL. In this case we have:

qf=tm_body%24value^5.0&qf=tm_title^13.0

which means we are probably interested in the “tm_body%24value” and “tm_title” fields. To make them appear in the results, we add them to the “fl” variable, so the URL becomes something like:

http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2...^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22

And the result should look something like:

solr full xml output

Debug the query

Now everything is ready for the final step in getting the debug information: adding the debug flag. It is very easy to do: all you have to do is add “debugQuery=true” to your URL, so that it looks like this:

http://localhost:8983/solr/select?fl=item_id%2Cscore%2Ctm_body%24value%2...^5.0&qf=tm_title^13.0&fq=index_id%3A%22articles%22&fq=hash%3Ao47rod&start=0&rows=10&sort=score%20desc%2C%20ds_created%20desc&q=%22building%22&debugQuery=true

You should now see more debug information, like how the query is parsed, how much time it takes to run, and probably the most important part: how the score of each result is computed. If your browser does not display the formula in an easily readable way, you can copy and paste it into a text editor; it should look something like:

solr debug output

As you can see, computing the score of an item is done using a pretty complex formula, with many variables as inputs. You can find a few more details about these variables here: Solr Search Relevancy


Jun 30 2015

Have you ever thought about building your own Software-as-a-Service (SaaS) business based on Drupal? I don't mean selling Drupal as a service but selling your Drupal-based software under a subscription model and using Drupal as the basis for your accounting, administration, deployment and the tool that serves and controls all the business processes of your SaaS business. Yes, you have? That's great! We’ve done the same thing over the last 12 months, and in this blog post I want to share my experiences with you (and we’d be delighted if you shared your experiences in the comments). I’ll show you the components we used to build Drop Guard – a Drupal-auto-updater-as-a-service (DAUaaS ;-)) that includes content delivery and administration, subscription handling, CRM and accounting, all based on ERPAL Platform.

I’m not talking about a full-featured, mature SaaS business yet, but about a start-up in which expense control matters a lot and where agility is one of the most important parameters for driving growth. Of course, there are many services out there for CRM, payment, content, mailings, accounting, etc. But have you added up all the expenses for those individual services, as well as the time and money you need to integrate them properly? And are you sure you’ve made a solid choice for the future? I want to show you how Drupal, as a highly flexible open source application framework, brings (almost) all those features, saves you money in the early stages of your SaaS business and keeps you flexible and agile in the future. Below you’ll find a list of the tools we used to build the components of the Drop Guard service.

Components of a SaaS business application

Content: This is the page where you present all the benefits of your service to potential clients. This page is mostly content-driven and provides a list of plans your customers can subscribe to. There’s nothing special about this as Drupal provides you with all the features right out of the box. The strength of Drupal is that it integrates with all the other features listed below, in one system. With the flexible entity structure of Drupal and the Rules module, you can automate your content and mailings to keep users on board during the trial period and convince them to purchase a full subscription.

Trial registration: Once your user has signed up using just her email address, she’ll want to start testing your service for free during the trial period. Drupal provides this registration feature right out of the box. To deploy your application (if you run single instances for every user), you could trigger the deployment with Rules. With the commerce_license module you can create an x-day trial license entity and replace it with the commercial entity once the user has bought and paid for a license.

Checkout: After the trial period is over, your user needs to either buy the service or quit using it. The process can be just like the checkout process in an online store. This step includes a subscription to a recurring payment provider and the completion of a contact form (to create a complete CRM entry for this subscriber). We used Drupal commerce to build a custom checkout process and commerce products to model the subscription plans. To notify the user about the expiration of her trial period, you can send her one or more emails and encourage her to get in touch. Again, Rules and the flexible entity structure of Drupal work perfectly for this purpose.

Accounting: Your customer data needs to be managed in a CRM, as it’s some of the most valuable information in your SaaS business. If you’ve just started your SaaS business, you don’t need a full-featured and expensive CRM system, but one that scales with your business as it grows and can be extended later with additional features if needed. The first and only required feature is a list of customers (your subscribers) and a list of their orders and related invoices (paid or unpaid). As we use CRM Core to build the CRM, we can extend the contact entities with fields, build filterable lists with Views, reference subscriptions (commerce orders) to contacts and create invoices (a bundle of the commerce order entity pre-configured in the ERPAL invoice module).

Recurring payment: If you run your SaaS business on a subscription-based model where your clients pay for the service periodically, you have two options for processing recurring payments. Handling card data yourself is not worth trying, as it’s too risky, insecure and expensive. So either you use a service like Stripe to handle recurring payments for you, or you use any payment provider to process one-time payments and implement the recurring logic in Drupal. There are some other SaaS payment services worth looking at. We chose the second option, using Paymill to process payments in combination with commerce_license and commerce_license_billing to implement the recurring feature. For every client with an active subscription, an invoice is created every month and the amount is charged via the payment provider. Then the invoice is set to "paid" and the service continues. The invoice can be downloaded in the portal and is accessible to both the SaaS operator and the client as a dataset and/or a PDF file.

Deployment: Without going into deep detail on application deployment, Docker is a powerful tool for deploying single-instance apps for your clients. You may also want to look at API-based Drupal hosting platforms, such as Platform.sh, Pantheon or Acquia Cloud, if you want to sell Drupal-based applications via a SaaS model. They make deployment very comfortable and easy to integrate. You can use Drupal multi-site instances or the Drupal access system to separate user-related content (the latter can be very tricky and can have performance impacts with large data sets!). If your app produces a huge amount of data (entities or nodes), I recommend single instances with Docker or a Drupal hosting platform. As Drop Guard automates deployment and therefore doesn’t produce that much data, we manage all our subscribers in one Drupal instance but keep the decoupled update server horizontally scalable.

Start building your own SaaS business

If you’re considering building your own SaaS business, there’s no need to start from scratch. ERPAL Platform is freely available, easy to customize and uses Drupal contrib modules such as Commerce, CRM Core and Rules to connect all the components necessary to operate a SaaS business process. With ERPAL Platform you have a tool for developing your SaaS business in an agile way, and you can adapt it to whatever comes in the future. ERPAL Platform includes all the components for CRM and accounting and integrates nicely with Stripe (and many others, thanks to Drupal Commerce) as well as your (recurring) payment provider. You can modify the default behavior with entities, fields, rules and views to extend the SaaS business platform. We used several contrib modules to extend ERPAL Platform to manage licensed products (commerce_license and commerce_license_billing). If you want more information about the core concepts of ERPAL Platform, there’s a previous blog post about how to build flexible business applications with ERPAL Platform.

This is how we built Drop Guard, a service for automating Drupal updates with integration into development and deployment workflows. As we’ve just started our SaaS business, we’ll keep you posted with updates along our way to becoming a full-fledged, Drupal-based SaaS business. For instance, we plan to add metrics and marketing automation features to drive traffic. We’ll share our experiences with you here and we’d be happy if you’d share yours in the comments!

Jun 30 2015
Jun 30

Queries are the centerpiece of MySQL, and they have high optimization potential (in conjunction with indexes). This is especially true for big databases (whatever "big" means). Modern PHP frameworks tend to execute dozens of queries per request. So, as a first step, you need to know which queries are slow. A built-in solution for this is the MySQL slow query log. It can be activated either in my.cnf or dynamically with the slow_query_log option. In both cases, long_query_time should be reduced to an appropriate value. Most Linux distributions ship with a default of 1 second or more, which is far too high for web applications where you want an overall response time of a few hundred milliseconds. Depending on your performance needs, choose a value such as 0.1 or 0.01 seconds.
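
For example, you can enable the slow query log at runtime (the log file path is just an example):

  SET GLOBAL slow_query_log = 1;
  SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
  SET GLOBAL long_query_time = 0.1;

or persistently in my.cnf:

  [mysqld]
  slow_query_log = 1
  long_query_time = 0.1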

SQL consists of two different types of queries: those belonging to the data definition language (DDL) and those working with data (data manipulation language, DML). DDL queries usually have no performance implications, but there is an exception to this rule of thumb: ALTER TABLE statements can be very time-consuming if a table contains millions of records and uses (unique) indexes. We will cover a best practice for this in a minute. DML queries can in turn be divided into INSERT statements on the one hand and the other CRUD statements (SELECT, UPDATE and DELETE) on the other. These statements can be optimized with several techniques, and most of this blog post addresses them.

Optimizing ALTER TABLE statements

Imagine you have an accounts table with millions of records and you want to extend it with a field for a phone number. A direct execution of ALTER TABLE would certainly lead to major load. The trick is to avoid index ad-hoc re-calculation. Hence, we drop all indexes and copy the table to an extra table and perform structural changes there.

  1. Set innodb_buffer_pool_size appropriately. (Be aware: for performing structural changes, a high buffer pool size can speed things up; in live operation, however, an overly high value can lead to memory shortages.)
  2. (Optional) Backup the database
  3. Drop all indexes except primary key and foreign keys
    DROP index ...
  4. Copy the table and apply structural changes. Use a similar name, for example with the suffix '_new'.

    CREATE TABLE IF NOT EXISTS `Accounts_new` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `email` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
      `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1;

    ALTER TABLE `Accounts_new` ADD `phone` VARCHAR(255) NOT NULL;

  5. Copy data with INSERT INTO ... SELECT. Just select the columns that are used in the new table.
    INSERT INTO Accounts_new SELECT `id`, `email`, `city`, '' FROM Accounts;
  6. Rename the table. In case of used foreign keys disable the foreign key check.

    SET foreign_key_checks = 0;
    DROP TABLE Accounts;
    ALTER TABLE Accounts_new RENAME Accounts;
    SET foreign_key_checks = 1;

  7. Create all indexes including foreign keys.
    CREATE index ...

Two steps require major effort. First, copying all the data to the new table will take some time; second, rebuilding all the indexes can take a long time (depending on the number of indexes and whether they are unique or not).


Optimizing insertions

INSERT queries should be merged where possible: a single query that creates 10 rows is faster than 10 separate queries. This technique has its limits, though, especially if MySQL runs out of memory. If you want to import a whole database, you can also switch off some consistency checks, for example foreign_key_checks=0 and unique_checks=0. Moreover, autocommit=0 can help.
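
For example, a merged INSERT and a typical bulk-import wrapper look like this:

  # One multi-row INSERT instead of three single-row statements
  INSERT INTO Accounts (email, city, phone) VALUES
    ('a@example.com', 'Berlin', '030-111111'),
    ('b@example.com', 'Hamburg', '040-222222'),
    ('c@example.com', 'Munich', '089-333333');

  # For a whole import: relax checks, commit once at the end
  SET foreign_key_checks = 0;
  SET unique_checks = 0;
  SET autocommit = 0;
  -- ... bulk INSERT statements ...
  COMMIT;
  SET unique_checks = 1;
  SET foreign_key_checks = 1;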


Optimizing SELECT statements

SELECT, UPDATE and DELETE statements have one thing in common: the way they filter results (the WHERE clause). This can turn out to be a complex task, especially for big tables. "Big" here means tables with row counts upwards of 100,000; tables with more than a million rows should definitely be included in query optimization. For the sake of simplicity, we concentrate on SELECT queries, which are the most frequent case anyway.


1) Use EXPLAIN

If you want to optimize a query, you should know how MySQL executes it. You can use EXPLAIN to get the query execution plan. Since MySQL 5.6 it is also possible to use EXPLAIN for INSERT, UPDATE and DELETE statements.

EXPLAIN SELECT * FROM Users WHERE uid = 1;

The result contains several useful pieces of information about the query:


  • select_type: Is the query a simple query (SIMPLE) or a compound query (join or subquery)?
  • type: Extremely important for joins and subqueries: how is this table joined? The best types are const, ref and eq_ref. Worse types are range, index and all. Attention: do not mix up index with ref/eq_ref! For further information, please visit the MySQL docs.
  • possible_keys: A list of indexes which could be used to speed up the query.
  • key: The index actually used.
  • key_len: The length of the index. Shorter indexes tend to perform better.
  • ref: Which column is used for the index scan?
  • rows: Estimated number of rows that have to be compared against the filter criteria. This number should be as low as possible.
  • Extra: Additional information about the query. Attention: do not mix up Using index here with the index join type!

MySQL docs: http://dev.mysql.com/doc/refman/5.0/en/explain-output.html

If the query is a simple query (i.e. no joins or subqueries are used), then EXPLAIN will return a single line where select_type is set to SIMPLE. To get a good performance, it is important to use an existing index. This is the case when type is equal to ref and possible_keys and key suggest an index.

If joins are used, the returned result will contain one line per table. Joining tables should always be done via a foreign key comparison; in this case the type in EXPLAIN is eq_ref. Don't leave out foreign keys, and try to avoid joins on different attribute types, for instance a varchar field and an integer field: this forces MySQL to do a lot of type conversions, which is simply not good.


2) Use existing indexes

Indexes are, by design, ordered by (at least) one attribute. Thus, they can be applied to queries which filter by this attribute, either as an exact filter (WHERE x = 'y') or as a range query (WHERE timestamp >= 123). Indexes are not applicable if you apply a function to the column in the WHERE clause, for instance WHERE SUBSTR(name, 0, 4) = 'Alex'. The following list shows which WHERE clauses can be handled by indexes:

WHERE x = 'y' ✓
WHERE timestamp >= 123 ✓
WHERE timestamp BETWEEN 123 AND 456 ✓
WHERE name LIKE ('Ale%') ✓
WHERE name LIKE ('%Ale%') ✗
WHERE SUBSTR(name, 0, 4) = 'Alex' ✗

If you have more than one filter criterion in the query, your index should include all used columns as well. Imagine you have the following indexes: name_IDX, firstname_IDX, firstname_name_IDX and name_firstname_IDX. Then the query

# Using composite indexes
SELECT * FROM Users WHERE firstname = 'Alex' AND name = 'Saal'

... could be optimized with firstname_IDX and firstname_name_IDX, but not with name_firstname_IDX, because of the order of the columns! The order has to be the same in the query as in the index. It is like using a telephone book, which is ordered by last name, then by first name. It is much easier to first look up all persons with the desired last name and end up with a short list. It makes no sense at all to browse the whole telephone book looking for a given first name and only then compare last names.

Keeping this image in mind: it is always good to have a selective index. You can use an index on a customer's gender, but this reduces the data set only by about half. It is much better to index something like an e-mail address, or a unique value like a Social Security Number. Be selective! As a rule of thumb, there are 3 levels of selectivity:

  • Primary key or unique key (best; those clauses will return a single row immediately)
  • An index matching the WHERE clause, or a prefix index (useful for text fields)
  • No key or index is applicable (worst)

Furthermore, firstname_name_IDX matches better than firstname_IDX and will be preferred by MySQL. Note that firstname_name_IDX can also be used for queries like

# Filtering the first name
SELECT * FROM Users WHERE firstname = 'Alex'

It is therefore neither necessary nor recommended to have both indexes in place at the same time.

Indexes are always read from left to right. If you have an index containing multiple columns, say names (column_firstname, column_familyname), your query must filter on the leading column for the index to be usable. If you filter only on the second column (column_familyname) without the first (column_firstname), the index cannot be used. In that case it is sometimes better to add a second index on just the second column. Check your statements with EXPLAIN to see which index is actually used, as in the sketch below.
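
A quick sketch of this left-most prefix rule, using EXPLAIN to verify:

  CREATE INDEX names_IDX ON Users (firstname, name);

  # Can use names_IDX (the leading column is filtered):
  EXPLAIN SELECT * FROM Users WHERE firstname = 'Alex' AND name = 'Saal';
  EXPLAIN SELECT * FROM Users WHERE firstname = 'Alex';

  # Cannot use names_IDX (leading column missing) - add a second index instead:
  EXPLAIN SELECT * FROM Users WHERE name = 'Saal';
  CREATE INDEX name_IDX ON Users (name);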


3) OR statements

The MySQL query optimizer often cannot use indexes efficiently when conditions on different columns are combined with OR, so try to avoid OR statements where possible. One common workaround is shown below.
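
If the OR combines conditions on two different indexed columns, the query can be rewritten as a UNION so that each branch can use its own index:

  # Instead of:
  SELECT * FROM Users WHERE firstname = 'Alex' OR name = 'Saal';

  # Rewrite as:
  SELECT * FROM Users WHERE firstname = 'Alex'
  UNION
  SELECT * FROM Users WHERE name = 'Saal';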


4) Optimization of GROUP BY/ORDER BY queries

Sometimes you are facing queries that aggregate or sort rows:

# GROUP BY / ORDER BY
SELECT role, count(*) FROM Users WHERE registration_ts > 140000000 GROUP BY role;
SELECT id, username FROM Users WHERE registration_ts > 140000000 ORDER BY username;

What MySQL does is:

  1. Selection of Users by WHERE registration_ts > 140000000,
  2. Sorting the results of step 1 (this happens whether GROUP BY role or ORDER BY username is used)
  3. Projection to the desired columns by SELECT role or SELECT id, username

The hardest step is the sorting. This is where indexes can help a lot: they contain a sorted list of records according to their definition, which is extremely helpful if the table holds a lot of data (comparison sorts cost O(n*log(n))). How do you define the index to optimize such a query? First choose the column filtered in the WHERE clause, then those in the GROUP BY/ORDER BY (in the same order as in the query!). If possible, also add the columns of the SELECT to the index (after the GROUP BY/ORDER BY columns) to gain some extra performance; this technique is called a covering index. It is not always reasonable to use covering indexes: if the whole index gets too big, you probably won't gain any time.

Extending the telephone book example: such an index is helpful for requests like "Tell me how many persons have the last name 'Smith'" (a GROUP BY) or "Give me a list of all persons ordered by last name and first name" (an ORDER BY).

For the previous example, use the following indexes (see the statements after this list):

  • registration_role_IDX for the GROUP BY statement
  • registration_username_IDX for the ORDER BY statement
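
Following the rule above (WHERE column first, then the GROUP BY/ORDER BY columns), the two indexes could be created like this:

  CREATE INDEX registration_role_IDX ON Users (registration_ts, role);
  CREATE INDEX registration_username_IDX ON Users (registration_ts, username);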

5) Usage of subqueries

When it comes to complex queries, MySQL (especially before 5.6) is optimized for JOIN statements. However, in some cases a subquery can be more efficient, for example if you use GROUP BY and ORDER BY on columns of different tables: in that case MySQL cannot use a single index for both operations when the tables are joined. Defining a main query and a subquery avoids this problem, as each query acts on its own table and is able to use any available index.

# Case A: Query as INNER JOIN
SELECT
    a.id AS account_id,
    p.id AS product_id,
    TRIM(SUBSTRING(p.name, 1, 30)) AS product_name,
    COUNT(*) AS count
FROM Accounts a
INNER JOIN Orders o ON a.id = o.account_id
INNER JOIN Products p ON p.id = o.product_id
GROUP BY p.id
ORDER BY a.id

# Case B: Subquery
SELECT account_id, product_id, product_name, count
FROM (SELECT
    a.id AS account_id,
    p.id AS product_id,
    TRIM(SUBSTRING(p.name, 1, 30)) AS product_name,
    COUNT(*) AS count
  FROM Accounts a
  INNER JOIN Orders o ON a.id = o.account_id
  INNER JOIN Products p ON p.id = o.product_id
  GROUP BY p.id) as product
ORDER BY account_id

Here, the query has been split into an outer query and a subquery. Case A would make MySQL create a temporary table and use filesort; Case B can avoid that. Which variant is superior depends on the size of each table.


Jun 30 2015
Jun 30

Ever since we moved to Slack for our team’s instant messaging needs, what has excited me the most is the nice set of APIs that Slack offers to seamlessly integrate your apps into Slack chat.

What we needed immediately was a basic bot that handled Karma functionality allowing people to say ‘Thanks’ to others using the usual ‘++’ annotation.

We looked at options across various technologies. Node.js is the usual one you hear a lot about when people talk of chatbots these days.

Drupal was an option, though we were skeptical at first. Having Drupal intercept and analyze 3 to 4 messages every second at peak hours sounded like trouble. And no caching could help here, since each message is unique, comes from a different user and has to be processed individually.

But one clear advantage I could see, and wanted to have, was to insulate the rest of the team from the various complexities of the Slack APIs and configuration by exposing all of that through Drupal APIs. So if anyone needed to extend this bot further, they wouldn’t have to worry about the Slack API at all; they would just build very simple Drupal modules.

Over a weekend, a couple of us teamed up to build a chatbot for our Acquia India Slack chatrooms, using what we knew best: Drupal. We launched it on March 1st 2015. Our bot was christened Regina Cassandra, and she has been up and running ever since with no downtime or any issues so far.

The Karma Handling..

Rarely used, but it can text people..

And when the Cricket World Cup was happening, Regina was busy serving us scores whenever requested..

Regina also used to give everyone a daily fortune cookie. She doesn’t seem to do that anymore, since the API that the fortune cookie module was using appears to be dead now.

The bot uses Slack’s Outgoing Webhooks to intercept each message posted to the chatrooms, and allows all modules on the chatbot site to intercept the message and respond to it.
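
The actual bot isn't public, but a minimal Drupal 7 sketch of such an endpoint could look like this (the path, module name and custom hook are hypothetical; Slack's legacy outgoing webhooks POST fields such as user_name and text):

  /**
   * Implements hook_menu().
   */
  function slackbot_menu() {
    $items['slack/webhook'] = array(
      'page callback' => 'slackbot_handle_message',
      // Open endpoint; a real bot should verify the token Slack POSTs.
      'access callback' => TRUE,
      'type' => MENU_CALLBACK,
    );
    return $items;
  }

  /**
   * Page callback: handles a message POSTed by Slack's outgoing webhook.
   */
  function slackbot_handle_message() {
    $text = isset($_POST['text']) ? $_POST['text'] : '';
    $user = isset($_POST['user_name']) ? $_POST['user_name'] : '';
    // Let any module react to the message via a custom hook, so extending
    // the bot means implementing hook_slackbot_message() in a tiny module.
    $replies = module_invoke_all('slackbot_message', $text, $user);
    drupal_json_output(array('text' => implode("\n", $replies)));
    drupal_exit();
  }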

The bot (a headless Drupal site) is hosted on a free-tier Acquia Cloud subscription. Given the decent performance and the sub-second response times we currently see from the slackbot, we have never felt the need to upgrade.

Jun 30 2015
Jun 30

Much ink has been spilled about which open-source ecommerce platform is the “best.” Most comparisons perpetuate an easy (but usually incorrect) way of understanding these two platforms and whether they are a fit for your business. They are often compared like word processors, based on line-item feature comparisons, instead of as the powerful business growth engines and visible brand extensions that they are. To reduce them to published feature sets or architectural comparisons is foolish, unhelpful, and often leads companies down the wrong path. A better approach is to fully understand current and future business requirements and decide based on which solution can serve those needs best.

At a high level, the most important question is: “Do you know what you want and how you want it done?” If you don’t know what you want, you will likely consider a tool with lots of features out of the box. The tradeoff is that those features come with assumptions that are set in stone. While lots of prepackaged features may feel good now, you risk not being able to adapt as quickly as your competitors, or finding that modifying those features leads to incompatibilities down the road. The alternative is a framework where you get a larger feature set with fewer assumptions. The tradeoff here is that you have more work to do to get off the ground—planning and implementing the exact features and experience you want—but with endless flexibility to mold a solution that exactly meets current and future business requirements. Trying to frame these solutions in purely quantitative terms just won’t do.

Let’s take a step back from the deeply rooted (and borderline religious) discussion of frameworks and function sets, and examine at a higher level both Drupal Commerce and Magento. For business owners who are trying to figure out what’s best for them and anyone who has any experience with either technology, let’s talk about what really makes Drupal Commerce different from Magento.  Let’s get away from discussions about classes, architecture, benchmarks, features, etc. and instead, talk about each solution and objectively what problems they solve and which they do not.

To start off, I’d like to restate a quote (attributed to Adobe SE leads) from Bryan House’s “Competing with Giants” presentation at DrupalCon Denver:

If you are looking at both (Adobe) CQ5 and Drupal, then one of us is in the wrong place.

This quote struck me. It sank deep into my soul. In a way, once I let the weight of these words really take hold, it completely changed my way of thinking. To help, consider this slight rewording:

If you are looking at both Magento and Drupal Commerce, then one of us is in the wrong place.

The obvious implication of this statement is that both Magento and Drupal Commerce have unique roles in the online commerce ecosystem. They are each geared towards certain types of projects and use cases. Instead of pitting the platforms against each other to crown a winner based on some arbitrary set of features or architecture, a better approach is to first establish a clear understanding of customer needs. When the needs of a client are properly mapped to the strengths of each platform, one will clearly meet those needs in a way that the other does not, removing the need for a feature comparison.

Framing the solutions

What I’d like to attempt here is (as much as possible) an unbiased and systematic approach to discussing Drupal + Drupal Commerce and Magento as unique answers to the question “which commerce platform should I choose?” Keeping the internals aside, here are the particular use cases that make a lot of sense for each platform. This isn’t a comprehensive list, but if you’re trying to figure out which platform you should be looking at, take a look. If you find one column aligning with your particular needs, chances are that’s the one that will be a better fit for your business.

Content strategy
  Drupal Commerce: various types of content with rich relationships, taxonomies, and categories
  Magento: catalog and product content with basic site content or blog

Catalog complexity
  Drupal Commerce: unrestrained catalog creation and product presentation options
  Magento: conventional catalog format and product presentation

Product offering
  Drupal Commerce: non-traditional and mixed-product offerings
  Magento: traditional physical and/or digital product offerings

Platform functionality
  Drupal Commerce: open, flexible feature set and custom application foundation
  Magento: commerce-focused feature set

Admin interface
  Drupal Commerce: basic yet customizable admin interface
  Magento: robust, rigid admin interface

User experience
  Drupal Commerce: strong, defined vision for a bespoke user experience
  Magento: best-practice, industry-standard user experience

Business strategy
  Drupal Commerce: commerce is part of a larger offering or experience
  Magento: commerce is the strategy

Development skill level
  Drupal Commerce: basic PHP knowledge required
  Magento: advanced PHP knowledge required

Now that we’ve drawn some lines, let’s discuss.

Content Strategy

Drupal Commerce (by way of Drupal) has an extremely powerful content system which allows for boundless creation of content types all with their own custom fields and attributes, editing experience, and a set of rich media tools. Content can be related to each other and those relationships can be harnessed to generate lists of related products and blog posts on product pages, or customized landing pages with unique product listings and content. It’s a breeze to set this up and you can do all of this without touching a line of code. If providing content and information to your customers is vital to your business and how you differentiate yourself from others, Drupal is what you want.

Magento, on the other hand, has a very basic content system. You can add pages, add some content to category pages, and adding attributes to products is painless. There are even some great built-in blog modules. But once you step outside of this, you’re in custom territory. You’ll either need two systems (like a blog or a CMS) or you’ll end up building it all custom into Magento increasing cost and ongoing support. Again, it’s not that Magento can’t do content at all, just that Magento’s content features are pretty basic. Enterprise does expand on this, but you still have a very limited tool set and code changes (requiring a developer) are usually required to expand on it.

Catalog Complexity

Magento offers what any reasonable person would consider a wholly conventional approach to catalog management. You have a catalog root, and from there you can create tiers of categories. Products fall into one or more of those categories. In fact, it’s pretty common for a product to exist within multiple groups, based on how visitors will look for those particular products. But Magento is also pretty strict that products really can’t be displayed outside of this hierarchy. Aside from some of the product blocks for up-sells and cross-sells, your ability to display products is completely centered around it. Also, product listings are limited to list and grid views without additional extensions or modifications.

Drupal Commerce releases you from this constraint. Products can be organized, tagged, and dynamically added to or removed from product lists automatically. A traditional catalog-like user experience can still be built, but the catalog automatically follows how you already organize your products and can use virtually any attribute that exists on a product. And when you want to display your products, you can choose from a number of pluggable styles (tables, grids, lists), and each product can have its own customized look and feel in a product list, too. This can make a huge difference as you try to differentiate, promote, and get your visitors engaged in what you have to offer—no matter how many products you have or how complicated they are.

Product Offering

If you’re selling physical and/or digital products, both platforms are fairly good at that. In fact, Magento again has a lot of features that don’t require individual setup. Want simple and no-fuss sales of traditional products? Magento can tackle that easily. With Drupal Commerce, you start with a basic product structure and are then free to build exactly what you want no matter how complex it might be.

When it comes to non-traditional offerings—event registrations, donations, recurring billing, licensing, and subscription service models—Drupal Commerce provides tools to configure or build what you need without having to reinvent the wheel. And best of all, you can mix any and all of these product types pretty easily. So if you want to do registrations, t-shirt sales, and a recurring monthly donation, you can easily support that in a single site and in a single transaction.

Platform Functionality

Magento has a well implemented and cohesive commerce feature set. And frankly, if you’re judging a product solely on the published feature set, Magento looks good. That’s not because Drupal Commerce doesn’t have a great feature set—in fact it’s much more expansive than Magento’s—but Drupal Commerce’s flexibility is in the expansive and ever-growing list of modules. It’s hard to quantify. If you’re only looking for a shopping cart and you’re happy with what Magento provides, it may very well be the right choice.

However, if you want to integrate features that go beyond commerce—you want to define and build your own custom application or create a totally unique customer experience—then Drupal Commerce will be a much better platform, enabling you to adapt quickly to business and market changes. Entirely new areas of functionality can be configured and enabled just like a new feature. Whether you’re adding customer portals, forums, web services, an LMS, or even back-office functionality, Drupal can give you the agility and freedom to change and grow as you need to.

Admin Interface

While Drupal’s administrative interface can be endlessly customized and tailored to your specific needs (in many cases without even touching the code), it generally tends to be pretty basic. It is trivial to create new screens, targeted at specific users, that present specific information and the actions that can be performed on it. In short, you can get what you want, but you’ll have to spend the time configuring it.

Magento’s administrative interface is comprehensive and gives users a structured, well-defined way to manage the entire store. If you’re willing to use what’s out of the box, then it will serve you well. The pain will come if you ever decide to deviate from the out of the box experience. Customizations require code modification and even “small changes” could require considerable effort to complete.

User Experience

When it comes to user experience, Magento delivers a best-practices, industry standard implementation of a traditional shopping cart: you get a catalog, a cart with a one-page checkout, account pages, etc. It’s designed to be a finished product, so you can pretty much trust it will all be there and that it will work well.

Drupal Commerce provides all of that same functionality, but expects you to expend some effort to make it look good. At a minimum, you’ll need to theme it. That’s not much to ask since you’re likely already doing that for the rest of your site. Drupal’s theme system is extremely powerful and adding unique or advanced features can be really easy. In some cases, little to no theme work is required. In addition, the user experience for path to purchase can be more easily integrated with the content experience, giving the merchant far more content marketing and merchandizing avenues.

Business Strategy

Drupal is a powerful platform. If you don’t know what I’m talking about, it’s something that can’t be explained in a single paragraph. Drupal can be a community forum, a wiki, an LMS, a translatable content management platform, a web services platform, and an online store. In fact, it could be all of these things at one time. If your vision calls for a platform that can do more than one thing, Drupal can rise to the challenge and integrate several business platforms under a single (software and hardware) roof.

Magento, no surprise here, is a shopping cart. That’s what it does. It does it well, but if you are wanting to integrate Magento with another part of your business (e.g. magazine subscriptions, forum membership, etc.) you’ll have to deal with two independent systems talking with each other. You’ll be synchronizing your data between multiple systems and having to keep everything up to date with custom or 3rd party solutions.

Development Skills

If you’re wondering how easy it’ll be to integrate your team with either Drupal Commerce or Magento, here’s what you need to know.

Magento is a very powerful and complex system. It makes heavy use of objects, inheritance, and programming concepts that are confusing to beginning and even some moderately experienced PHP developers. Getting acclimated to Magento as a back-end or front-end developer can take weeks or months, depending on experience level. Architecturally, Magento also has some profound gotchas when it comes to adding and developing many extensions on a site. Documentation is so-so, but there is a very active community of bloggers, training is available, and Magento support is widely available.

Drupal Commerce is much simpler and even people with minimal to no PHP experience can customize and pick it up within a few days. While parts of Drupal use objects (such as Views and the entity system) much of it is procedural. Drupal is designed to be much more accessible to individuals without coding experience. This flexibility is made available to non-coders through the various modules (such as Views, Rules, features, VBO, etc.) that offer powerful UIs to manage it. However, when code is necessary, bespoke modules can often be very simple. Documentation is generally very good for things like Drupal and Drupal Commerce, while contributed modules can vary from having non-existent to excellent documentation. Again, a very active and friendly community exists to support Drupal developers and users, and a wide range of training and support is available.

Conclusion

When deciding on an open source ecommerce solution, it is important to first look at the fundamentals of your business and identify your priorities. By doing this you will avoid the needless exercise of feature comparisons and checklists and quickly conclude that one of these solutions is in the wrong place. If content is important to how you engage with customers and sell your product, and if you want to control and choose how the solution supports your business needs, Drupal + Drupal Commerce is generally the right choice.

Jun 29 2015
Jun 29

Time flies – it’s already summer, and I hope yours is going well! Seems like just yesterday I was at DrupalCon in Los Angeles, the famous city of movie-making – which makes it sound like a dream… at least my own dream, one that came true, because part of our team was invited to LA by an extraordinary company – X-Team.

(Side note: I must say that combining work with travel is a greatly recommended experience, as it brings a breath of fresh air to your usual working process.)


DrupalCon brings together thousands of people from across the globe who use, develop, design, and support Drupal.

Such an IT event sounds like a big deal. Combined with California and Silicon Valley, it sounds even bigger – like a promise that can be hard to deliver on. But it totally delivered, and with impressive style. There were a lot of attendees, talks, trainings, huge conference halls and plenty of social events. There was a little something for everyone.

Although it can be pricey to get to one of these, I would definitely attend another, as the value you get from one is incredible. Here’s why:

DrupalCon is inspiring

For me, it’s mostly about getting inspired, and you certainly get that from the keynotes (by Dries, Whitney Hess, Matt Asay). They serve as examples of how influential people involved in Open Source can be. During such keynotes (but not only!), you can pick up a crucial word or two that will have a direct impact on your Drupal research and push you forward. And, of course, the future plans for the technology you use are revealed before you! Learning from strong, inspiring leaders gives you yet another advantage as a Drupal developer.

But getting inspired isn’t limited to just the keynotes – all the community collaboration that happens counts too, as it gives you strong motivation to write high-quality code, work on interesting projects and, of course, share knowledge and contribute to the Open Source movement. And it’s great to see so many people around you who want to work with, use and improve technology for the common good.

Drupal already has a really huge and awesome community – one that keeps growing! Conferences certainly play a key role here: every attendee taking their very first steps in Drupal’s world can find support during trainings and even social events like the First-time Attendee Social. And that works just great, thanks to a positive atmosphere where people feel very comfortable. So basically, no one should ever be afraid to attend their very first conference!

DrupalCon helps you meet colleagues

Personally, I’ll also remember this conference as an opportunity to meet my teammates (Ardi, Kuba, Sven) in person. Since we all work remotely, conferences, meetups, camps and the like create a unique occasion to hang out together, get to know each other better, and strengthen bonds.


And the very same applies to business partners – many of them often attend such events as well, and it’s always great to do a real handshake with them. Not to mention participating in a daily scrum meeting with people from five different countries, sitting at one table…


If you’re going to DrupalCon, be ready to meet people from all over the world, have a lot of fun, discuss advantages and disadvantages of Drupal and any other web novelty, find people for collaboration/research, work partners, and sometimes even a job. And last but not least – you’re bound to make new friends.

Unleash your potential at your next DrupalCon and become part of the community. See you in Barcelona!


Jun 29 2015
Jun 29

One of the most exciting aspects of preparing for a DrupalCon is selecting the sessions that will be presented. It’s always incredibly cool and humbling to see all the great ideas our community comes up with – and making the official selections is definitely not an easy process! This time, the track chairs had over 500 session proposals to read through to determine what content would be presented in Barcelona. A lot of hours went into reading each proposal, and in the end they had to make some hard decisions, but they are very excited about the programming we will be offering at the Con.

After all was said and done, we are looking forward to presenting 111 sessions by 123 unique speakers, 78% of them male and 22% female. Coming from far and wide, a little over 10% of the speakers will be joining us from outside the Drupal community to share industry talks and provide a different perspective during their sessions.

See the Selected Sessions

Sessions aren't the only thing we've selected: our training opportunities have been chosen as well! To learn the latest cool Drupal tricks, make sure you sign up for one of our eight great training sessions. If you register now, you will receive 50€ off regular prices until 10 July 2015.

Check out a Training

Don't wait to get your ticket! Earlybird prices to hear the great content and attend a training are good until 10 July, so register now!

See you in Barcelona!

Jun 29 2015
Jun 29

We’re excited for Drupal GovCon coming up on July 22nd! We can’t wait to spend time with the Drupal4Gov community and meet fellow Drupalers from all over! Forum One will be presenting sessions in all four tracks: Site Building; Business and Strategy; Code & DevOps; and Front-end, User Experience and Design! Check out our sessions to learn more about Drupal 8 and other topics!

Here are our sessions at a glance…

Nervous about providing support for a new Drupal site? A comprehensive audit will prepare you to take on Drupal sites that weren’t built by you. Join this session and learn from Forum One’s John Brandenburg as he reviews the audit checklist our team uses before we take over support work for any Drupal site.

Drupal 8’s getting close to launching – do you feel like you need a crash course in what this means? Join Forum One’s Chaz Chumley as he demystifies Drupal 8 for you and teaches you all that you need to know about the world of developers.

If you’re wondering how to prepare your organization for upgrading your sites to Drupal 8, join WETA’s Jess Snyder, along with Forum One’s Andrew Cohen and Chaz Chumley as they answer questions about the available talent, budgets, goals, and more in regards to Drupal 8.

The building blocks of Drupal have changed and now’s the unique time to rethink how to build themes in Drupal 8. Join Chaz Chumley as he dissects a theme and exposes the best practices that we should all be adopting for Drupal 8.

Drupal 8’s first class REST interface opens up a world of opportunities to build interactive applications. Come learn how to connect a Node application to Drupal to create dynamic updates from Forum One’s William Hurley as he demonstrates the capabilities of both JavaScript and Node.js using Drupal, AngularJS, and Sails.js!

Are you excited to launch your new website, but getting held down by all the steps it takes for your code to make it online? On top of that, each change requires the same long process all over again… what a nail biting experience! Join William Hurley as he demonstrates the power of Jenkins and Capistrano for managing continuous integration and deployment using your git repository.

If you’re a beginner who has found the Views module confusing, come check out this session and learn important features of this popular module from Leanne Duca and Forum One’s Onaje Johnston. They’ll also highlight some additional modules that extend the power of Views.

Have you ever felt that Panels, Panelizer and Panopoly were a bit overwhelming? Well, come to our session from Forum One’s Keenan Holloway. He will go over the best features of each one and how they are invaluable tools. Keenan will also give out a handy cheat sheet to remember it all, so make sure to stop by!

Data visualization is the go-to right now! Maps, charts, interactive presentations – what tools do you use to build your visual data story? We feel that D3.js is the best tool, so come listen to Keenan Holloway explain why you should be using D3, how to use D3’s visualization techniques, and more.

Implementing modular design early on in any Drupal project will improve your team’s workflow and efficiency! Attend our session to learn from our very own Daniel Ferro on how to use styleguide/prototyping tools like Pattern Lab to increase collaboration between designers, themers, developers, and your organization on Drupal projects.

Are you hoping to mentor new contributors? Check out this session where Forum One’s Kalpana Goel and Cathy Theys from BlackMesh will talk about how to integrate mentoring into all the layers of an open source project and how to develop mentoring into a habit. They’ll be using the Drupal community as an example!

If you’re a beginner looking to set up an image gallery, attend this session! Leanne Duca and Onaje Johnston will guide you in how to set up a gallery in Drupal 8 and how to overcome any challenges you may encounter!

Attend this session and learn how to design and theme Drupal sites using Atomic Design and the Drupal 8 CSS architecture guidelines from our very own Dan Mouyard! He’ll go over our Gesso theme and our version of Pattern Lab and how they allow us to quickly design and prototype reusable design components, layouts, and pages.

Can’t make it to all of the sessions? Don’t worry, you’ll be able to catch us outside of our scheduled sessions! If you want to connect, stop by our table or check us out on Twitter (@ForumOne). We can’t wait to see you at DrupalGovCon!


Jun 29 2015
Jun 29

OSCON, the annual open source conference, brings over 4,000 people together in Portland this July. We are a proud participant again this year and we are excited to talk about Drupal to a wider audience. If you are looking for a reason to attend, you can use the code USRG, which will get you 20% off your registration. Or you can use the PCEXPOPLUS code to gain admission to the exhibition hall for free.

Taking Drupal to the larger open source world is a big job, and we need our amazing community's help. Help us spread the word that Drupal is at OSCON! If you're attending, please come by and say hi, let your new friends know they can find us in the nonprofit pavilion at table #6 from Tuesday evening through Thursday afternoon. Or, if you know someone who's in open source who will be at OSCON, please encourage them to come by and say hello! Here's a tweet you can share with your networks to help us spread the word:

Tweet: Going to OSCON 2015? Stop by and say hello. We'll be Drupalin' in the expo hall, nonprofit pavilion.

Thanks to the Portland Drupal community for helping out and to everyone for volunteering time at OSCON. If you want to help out by volunteering at the table, we'd love your assistance! You can sign up here.

See you at OSCON!

Jun 29 2015
Jun 29

 the red pill or the blue pill

A couple days ago, I received the first in a series of white papers created by the Drupal Association to help Drupal service providers prepare for (and arguably market) Drupal 8.

The paper briefly touches on a number of topics that highlight the benefits of Drupal 8: better usability, mobile-first design, better performance, easier migration and a new release cycle, multilingual capabilities, REST API built into Core, simpler and more accessible theming layer, and more.

All of these are really great, but none of these are particularly surprising.

The surprise was the acknowledgment that this new version of Drupal, because it aligns itself with current PHP programming standards, will help to thwart the notorious Drupal talent gap – the ongoing insatiable demand and search for quality Drupal developers.

This is most refreshing for a small technology-focused team like ours: Drupal 8 “opens up Drupal development to established OOP PHP developers without major retraining.” It will help us find and work with good PHP developers who may not be super familiar with Drupal. It will also allow us to re-align our internal practices (historically Drupal-focused) toward a more diverse set of technology options, and ultimately provide the best technology match for realizing our clients’ visions.

Working with Drupal has always somewhat felt like having to choose between the blue and the red pill. No more. Now we can believe whatever we want to believe and stay in Wonderland.

Jun 29 2015
Jun 29

The monthly Drupal core bug fix/feature release window is scheduled for this Wednesday. However, there have not been enough changes to the development version since the last bug fix/feature release two months ago to warrant a new release, so there will be no Drupal core release on that date.

Upcoming release windows include:

  • Wednesday, July 15 (security release window)
  • Wednesday, August 5 (bug fix/feature release window)

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Jun 29 2015
Jun 29

So you want a website. Maybe it is your first website. Maybe you've been here before, but you're starting afresh. You're full of enthusiasm. In your dreams, your website looks like a flashy cruise liner - huge, and with every amenity money can buy. However, your budget stretches to a dinghy with an outboard motor. So how can you rationalise your aspirations within your financial constraints?

You don't have to be a paragon of fiscal rectitude, but you do need to prioritise, and think a little cleverly about how you can approach the project.

Lego boats

Taking the boat analogy a little further, both the cruise ship and the dinghy can arguably meet your needs. Both will keep you out of the water. Both will get you from A to B. Both have the potential for memorable holidays. Sure, the cruise ship has a roof, but you could elect not to go boating in the rain, so the question is: do you need a roof?

Making your website fit your budget is all about prioritisation, and the realisation that not every available feature is actually necessary, or often even a good idea.

Time, cost and quality. These three things go into your project. You don't want to compromise on quality, because that will cause you problems in the future. You don't want to rush it too much because that will lead to mistakes. Your budget is limited, so the only thing that can be a variable is the scope of the project.

An On-going Development Relationship

This is where things get interesting. A website is not just for launch day: it is for the duration of its life. And over that lifetime, we would expect it to see new content, new features, new sections, upgrades, design changes... Users expect to see change when they return to your site, and this empowers you, the site-owner, to embrace the process of change over a longer time. Rather than trying to build a huge monolithic project on day one, why not stagger your build over a longer time, and use it to bring new features to your users as they are ready?

Take, for example, a house. When you buy a house, it is generally considered a poor idea to immediately demolish part of it and build extensions. It takes time to digest, time to experiment, time to learn what elements you like and which you really can't live with. Equally, you need to learn how your needs have changed in your new house and how they are different to your imagined needs before moving in. You'll also want to think about how your house works within its environment - where does the sun come in? are there any problematic times of year?

So too with websites. Just as you'll figure out your needs over time, it pays to keep an eye on what is happening and changing in your industry and the internet at large since you decided to invest in a site.

Start Small, Extend and Prioritize

So, with websites. As you start to use your site you will have ideas and wants and needs. If you have started small, with a view to extending your site, these ideas can be embraced and your site will grow with you as your experience and that of your users illustrates what the needs really are. Change can now be embraced, nurtured and turned into features that both you and your users really love.

The irony is that in most traditional projects, everyone is asked to do most of the planning, thinking and budgeting at the very outset – which is the point where everyone concerned has the least knowledge about the project, its constraints (both known and hidden) and its goals. This is essentially gambling and is fraught with the danger of failure.

Project knowledge increases over time

An Example Timeline

To illustrate a staged approach to development, here's a timeline for a simple web project.

Day One - Install Drupal and build our first content type: the basic page. We've identified half a dozen pages of content that will describe who we are and what we do. We'll include a FAQ page to relieve pressure on staff, and some downloadable documents to reduce printing costs. We'll also install Webform to get a contact form on the site. We'll use the core menu system for navigation, and add metatags and Google Analytics to keep on top of our SEO. We'll set up a solid, responsive theme, but will intentionally keep the design very simple so that we can embellish it later.

Pre-launch - We set up the site on a specialist, tuned, managed Drupal hosting platform and are ready to go.

Launch Day - We go live with a great looking, dependable, flexible, and fast, responsive site. The site is saving us time and money, increasing our visibility and working for us - from the start.

Two months in - We decide that we want a blog, in order to entice visitors and to establish ourselves as thought leaders in our industry. So we add a content type and a views listing (blog listing landing page), and some additional design and styling to make the blog articles look pretty.

Four months in - The blog is going well and we're getting readership traction. We would like to get further traction and extend our reach into social media. We build in share and follow buttons and Twitter cards to drive traffic. We add a block of latest blog articles to the home page.

Six months in - We decide to integrate Mailchimp to let people sign up to our new newsletter, in order to increase our reach and better communicate with our prospective customers. We get a newsletter signup block on the home page and all blog pages. We add some more design flair to our blocks to make them really sing.

Eight months in - We decide to open up a dialog with our readers, adding comments, and of course, spam protection!

Twelve months in -  We launch a range of beautiful products sold through a fully featured, integrated e-commerce solution from our site.

Who knows where we'll go next?

This is an example, but it could be your project. By starting with the bare minimum and extending the site over time, the site owners and developers have full knowledge of the project at all times and importantly, full knowledge of what they like and don't like. Each item of functionality is discrete and self-contained, allowing proper testing and proper scheduling of deployments.

The benefits of such an approach include:

  • Spread the cost of your site build over a longer time for less financial impact
  • Smaller chunks of work allow for defined, short-term goals and milestones
  • Each new feature is the most important feature to the site-owners, right now, so they get what they want quickly
  • Users enjoy a fresh site with each visit
  • The site can react to changing business needs
  • The entire team builds up project knowledge with every new feature, reducing potential for problems and on-boarding overhead, which all means a more efficient build process
  • Better bang for your buck because only the most important items are built and only these items are styled. You never end up styling something that is never used, and equally, you never end up with an un-styled feature because you ran out of budget.

Consider this approach if you:

  • are getting your first site built
  • have limited budget, but grand aspirations
  • are rebuilding your site from the ground up
  • don't have a lot of time to spend writing lots of content or answering endless questions from your developers
  • think your requirements may change, or are at all unsure what they are

When you're ready, we're ready.

If you want to discuss Annertech helping you build an award-winning website, please feel free to contact us by phone on 01 524 0312, by email at [email protected], or using our contact form.

Jun 29 2015
Jun 29

That is the main question. If you came here looking for a definitive answer, I'm afraid you won't find one. What you maybe will find is a discussion I propose on this topic and my two cents on the matter.

Why am I talking about this?

I've been working on a big Drupal website recently which has many contributed and custom modules. One of my recent tasks was enabling validation of an existing text field used for inputting phone numbers. The client needed the number in a specific format. No problem: a quick regex inside hook_node_validate() should do the trick nicely, something like the sketch below. But then it got me thinking: isn't there a module I can use for this instead? Well yes, there is: Field validation.
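
For reference, the custom solution was along these lines (the content type, field name and expected format are invented for this example):

  /**
   * Implements hook_node_validate().
   *
   * Rejects phone numbers that don't match the format +XX-XXX-XXXXXXX.
   */
  function mymodule_node_validate($node, $form, &$form_state) {
    if ($node->type == 'contact' && !empty($node->field_phone[LANGUAGE_NONE][0]['value'])) {
      $phone = $node->field_phone[LANGUAGE_NONE][0]['value'];
      if (!preg_match('/^\+\d{2}-\d{3}-\d{7}$/', $phone)) {
        form_set_error('field_phone', t('Please enter the phone number as +XX-XXX-XXXXXXX.'));
      }
    }
  }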

I installed this module, which comes with a plethora of validation rules and possibilities. The default phone validation for my country did not match the client's expectation, so I had to use a custom expression and achieved basically the same result. But was this the better option under the circumstances? You might say yes, because people have been working on this module for a long time to perfect it, keep it secure and provide all sorts of goodies under the hood. Not to mention that it's used on over 10,000 websites.

However, the Field Validation module is big. It has a bunch of functionality that allows you to perform all sorts of validation on fields. This is not a bad thing, don't get me wrong. But does my little validation need warrant the installation and loading into memory of so much code? My custom solution was very targeted and took no more than 10 or so lines of code.

One argument would be yes, because I may need other kinds of validation rules in the future and could use this module for those as well. But being already very far into the lifetime of this website, I think the odds of that are quite low. And even if it happens, would an extra 2-3 validation needs warrant the use of this module?

On the other hand, you can argue that your custom code is not vetted, is difficult to maintain, can be insecure if you make mistakes and basically represents some non-configurable magic on your site. These can all be true but can also all be false depending on the developer, how they document code and the functionality itself.

I ended up with the custom solution in this case because on this site I really want to introduce new modules only if they bring something major to the table (performance being my concern here). So of course, the choice heavily depends on the actual module you are considering, the website it would go on and the custom code you'd write as an alternative.

Moreover, please do not focus on the actual Field Validation module in this discussion. I am not here to judge its merits, but to ask whether installing any such module for a tiny purpose is the right way to go. This is mostly a Drupal 7 problem, as in D8 we use object-oriented practices by which we can have as much code as we want, because we only load the necessary parts when needed.

So what do you think? Do you have a general rule when it comes to this decision, or do you take it on a case-by-case basis? If the latter, what are your criteria for informing your choice? If the former, why is that? I'd love to hear what you have to say.

Jun 28 2015
Jun 28

I don't use the built-in update functionality provided by the Update module for updating code. I like to use it for reminders and for pushing usage statistics about installed modules back to Drupal.org. However, some people do use it. Sometimes this piece of functionality can fail and throw an interesting message for which there don't seem to be many answers, despite the best Google-fu.

UpdaterException: Unable to determine the type of the source directory. in Updater::factory() (line 99 of ../www/includes/updater.inc).

An exception because a source directory can't be determined. It turns out that the logic before the exception is thrown checks whether the temporary directory containing update downloads is an actual directory.

It is looking for the following:

$directory = 'temporary://update-extraction-' . _update_manager_unique_identifier();

If you encounter this error, check your file system settings (/admin/config/media/file-system) and make sure the temporary directory is configured properly. If you're not sure what it should be, remove the entry and resave the form to repopulate it with your web server's default. Then make sure Drupal (or rather, your web server) has permission to write to that temporary directory.
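
If you want to verify what Drupal will actually use before resaving the form, a quick sanity check run through drush php-eval might look like this (a sketch, assuming Drupal 7):

$temp = file_directory_temp();
// The Update manager extracts downloads into update-extraction-* under here,
// so the directory must exist and be writable by the web server.
print $temp . ': ' . ((is_dir($temp) && is_writable($temp)) ? 'writable' : 'NOT writable');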

Jun 28 2015
Jun 28

Recently you must have heard the term "headless Drupal". You may be wondering what exactly it is, how it differs from standard Drupal, and how you can implement it. If these are the questions plaguing you, then this is the post for you.

Conceptually headless Drupal is pretty simple. It involves two changes from standard Drupal:

  1. Instead of spitting out HTML, Drupal spits out data in JSON format.
  2. A front-end UI framework, such as AngularJS, EmberJS or React, renders the data to create a webpage.

Why do you need headless Drupal?

As you are probably aware, there has recently been an explosion of front-end JS frameworks, and Drupal's UI has not been able to catch up with them. With headless Drupal, you get the best of both worlds: you can use a new JS front-end framework backed by Drupal's reliability and flexibility in the back-end. That said, there are multiple ways you can implement this, depending on how integrated your UI is with Drupal. Here are two popular approaches:

1) Drupal is totally decoupled from the front-end UI.

In this case, the front-end UI doesn't really care what the back-end technology is. It only cares about the APIs that the back-end exposes. Here's a very simple example: in the front-end, you have a plain HTML file with some JS. When this HTML page loads, the JS executes a back-end API call to get the data; usually, you will parse the URL and pass an ID (e.g. a node ID) to the back-end in the API call. On the back-end, Drupal uses the Services or RestWS module to respond to the request received from the HTML file. Instead of responding with HTML, it returns JSON to the HTML file. The JS in the HTML file then parses the received JSON and renders it to create the full HTML page.

The advantage of this approach is that the back-end is totally decoupled from the front-end, so you can replace Drupal with any other technology in future (or with Drupal 8, for that matter) without making any change to the front-end, as long as the APIs remain the same.
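
To make the back-end half concrete, here is a minimal sketch of a custom JSON endpoint in Drupal 7 (the module name and path are hypothetical, and access checking is simplified; in practice the Services or RestWS module gives you this, plus authentication and more, out of the box):

function myapi_menu() {
  $items['api/node/%node'] = array(
    'page callback' => 'myapi_node_json',
    'page arguments' => array(2),
    'access arguments' => array('access content'),
    'type' => MENU_CALLBACK,
  );
  return $items;
}

function myapi_node_json($node) {
  // Respond with a trimmed-down JSON representation instead of themed HTML.
  drupal_json_output(array(
    'nid' => $node->nid,
    'title' => $node->title,
    'body' => field_get_items('node', $node, 'body'),
  ));
  drupal_exit();
}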

2) Drupal renders the HTML page in the above example.

The above scenario works very well if the data the HTML file is requesting is available to anonymous users. For authenticated users, it gets slightly more complicated. If your HTML file is on a different domain than Drupal's, the JS will first need to make a back-end call to log in and then a second call to get the data. On the other hand, if your HTML file is on the same domain as Drupal's, then you are essentially doing the theming twice: irrespective of the page you are on, you want the site layout and template to look consistent, so if Drupal is completely decoupled from the front-end for some pages, you will need to recreate Drupal's theme/template in the front-end HTML file for those pages.

To avoid this, you can let Drupal respond with the base HTML containing the same JS, and let the JS call Drupal's API on page load. As an example, say you want to implement the headless Drupal technique on the page /ui. First you create this page callback in hook_menu(), then create an html--ui.tpl.php which has all the regions that you need on the page; usually you will keep the header and footer so that the page looks like part of the same site. Then, using a preprocess function, add the relevant JS on that page. This is the JS that will call Drupal's API endpoint to get data in JSON format. (See the sketch below.)

The advantage of this approach is that since the HTML is rendered by Drupal, you will not need to duplicate the theming work, and if the user is authenticated in Drupal, they will be authenticated to make the back-end API call as well. There is no need to log in through Services first (you'll need to enable session authentication in Services for this). The disadvantage is that if, in future, you want to replace Drupal with some other technology, the front-end will need to be replaced as well.
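
Here is a minimal sketch of what option 2 might look like in a custom Drupal 7 module (the module name, path and JS file are hypothetical; the JS is attached in the page callback for brevity, though a preprocess function works the same way):

function myui_menu() {
  $items['ui'] = array(
    'title' => 'My app',
    'page callback' => 'myui_page',
    'access arguments' => array('access content'),
  );
  return $items;
}

function myui_page() {
  // The attached JS calls back to Drupal's JSON endpoint on page load
  // and renders the response inside the #app container.
  drupal_add_js(drupal_get_path('module', 'myui') . '/js/app.js');
  return '<div id="app">' . t('Loading...') . '</div>';
}

Because Drupal 7 derives template suggestions from the path, a theme-level html--ui.tpl.php (and page--ui.tpl.php) will be picked up automatically for /ui, which is where you keep the shared header and footer regions.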

In the next articles, we'll take you through a series of steps, starting with the simplest example of headless Drupal and ending with a full application that uses a front-end JS framework communicating with Drupal through APIs. In the meantime, you can look at our post on integrating AngularJS with Drupal, which provides an example of building a calendar using option 2 above.

Jun 28 2015
Jun 28

ChainedFastBackend is a new cache backend in Drupal 8 that allows you to chain two cache backends.

In order to mitigate a network round trip for each cache get operation, this cache allows a fast backend to be put in front of a slower backend. Typically the fast backend will be something like APCu: bound to a single web node and requiring no network round trip to fetch a cache item. The fast backend will also typically be inconsistent (it only sees changes from its own web node). The slower backend will be something like MySQL, Memcached or Redis: used by all web nodes and thus consistent, but requiring a network round trip for each cache get.

This is awesome news because:

  • CLI (drush/console) performance gets a heavy boost, because CLI processes do not share in-memory caches with your web application and thus always feel like they are starting on cold caches. Not anymore.
  • Some deployment procedures cause in-memory caches to be lost, and unless you warm the caches right after deployment, your application starts from a very cold state. If you are pushing an emergency update under heavy load, starting from a cold state with high traffic can have very bad side effects.

On small sites we usually cache nearly everything in memory so that they run AFAP (as fast as possible), so the two previous situations really are an issue.

Backporting ChainedFastBackend to Drupal 7 is quite straightforward. If you are using the Wincache module, ChainedFastBackend is already available for use!

As always, I only recommend enterprise-grade, robust storage backends, so you should give Couchbase a shot as an alternative to Redis or even Mongo. This is what the big players (PayPal, etc.) are using, and it gives you persistent, Memcached-compatible storage.

For those who don't know what Couchbase is all about: if you are already using Memcached, you can hot-swap to Couchbase with zero issues because it has a Memcached compatibility layer, with the added benefit that you can choose to make your storage persistent.

To setup your ChainedFastBackend follow these simple steps.

Register your cache backends in settings.php:

$conf['cache_backends'][] = 'sites/all/modules/contrib/wincachedrupal/drupal_win_cache.inc';
$conf['cache_backends'][] = 'sites/all/modules/contrib/wincachedrupal/ChainedFastBackend.inc';
$conf['cache_backends'][] = 'sites/all/modules/contrib/memcache/memcache.inc';

Tell the chained backend what it should use as fast and persistent backends:

$conf['fastBackend'] = 'DrupalWinCache';
$conf['consistentBackend'] = 'MemCacheDrupal';

Now distribute the cache bins at will, making sure that frequently written bins are not sent to the chained backend:

$conf['cache_default_class'] = 'ChainedFastBackend';
$conf['cache_class_fastcache'] = 'DrupalWinCache';

$conf['cache_class_cache_views'] = 'MemCacheDrupal';
$conf['cache_class_cache_form'] = 'MemCacheDrupal';
$conf['cache_class_cache_update'] = 'MemCacheDrupal';
$conf['cache_class_cache_menu'] = 'MemCacheDrupal';

Because this backend marks all the cache entries in a bin as outdated on each write to that bin, it is best suited to bins with few writes.

What kind of improvements can you expect?

Our deployment script triggers a custom crawler that warms up the application for anonymous and logged-in users. On the first site we tested this on, the first hit of the crawler after deployment went down from 13 seconds to a mere 1.3 seconds. Nice, but not surprising, as we are moving from a situation where the in-memory caches are completely lost to one where everything is still in Couchbase.

You will only see performance benefits if you are already using in-memory caching (APC/Wincache) or if your cache backend is volatile (like regular Memcached).

Jun 26 2015
Jun 26

We always say that you come for the code but stay for the community, but for some attendees, affording a trip to DrupalCon to interact with the community is a hurdle. We were happy to be able to offer €20,000 across 23 grant and scholarship recipients to help lessen that burden and help them make an impact in Barcelona. You can see who received a grant or scholarship here.

These attendees use this aid to make a huge impact within the Drupal community. Recipients will use their time at DrupalCon to make connections, further the project, learn things they can share with their local communities, and truly help get D8 out the door. We are so happy to welcome them to DrupalCon Barcelona. Here are a few recipients' plans for how they will use DrupalCon to make an impact:

  • Alina Mackenzie (alimac) will be attending DrupalCon on a grant and apart from giving and attending sessions, she wants to focus on the future of core mentoring - a hugely important way to contribute!  She will also make a huge impact by presenting the First-Time Sprinter Workshop on Friday, 25 September. 
  • Luis Angel Hañari Coyla (langelhc) attended DrupalCon Latin America and met some 'super drupalers' as he calls them; he was so inspired that he applied for a scholarship to attend DrupalCon Barcelona to continue learning from the larger community and bring that knowledge back to Peru to share and help others in the way that he says Drupal has helped him.
  • Alvaro Hurtado (alvar0hurtad0), an active Spanish Drupaler, is attending and will use his time in Barcelona to further help the project and community grow in Spain.  He recognizes the importance of interacting with the community at the largest European Drupal event and because he says he lives and breathes Drupal, he can't imagine not being there!
  • Dan Mulindwa (dcmul) is looking forward to the honor of sitting down with Drupalers from around the world and taking the knowledge he gains in Barcelona back to Uganda to mentor Drupalers there!
  • Ayesh Karunaratne (Ayesh) is a Sri Lankan who is active in the South Asian communities and has watched DrupalCon from afar, but will be attending his first DrupalCon thanks to the grant he received. He says that he understands that Drupal is not just about the code, and by attending DrupalCon he will really get to experience the larger Drupal community firsthand.

For a breakdown of how the Grant and Scholarship program looked for Barcelona, check out our infographic below:

Jun 26 2015
Jun 26

For a new twist on our keynotes, we are excited that our Thursday main stage will highlight two community speakers, both with important and interesting topics that all Drupalers can benefit from hearing.

Submitted as regular sessions, Mike Bell's (mikebell_) proposal about mental health in the open source world and David Rozas' (drozas) talk about the phenomenon of contributing to a community really struck a chord with the Track Chairs who select the DrupalCon session content. Although these talks did not fit into an established track, they raised important issues that stood out, and we felt they should be brought to light. What better place to get them heard than the keynote of a DrupalCon?

Mike Bell's proposal highlighted that "Mental health is a subject that touches every single one of us within the community. It's something that people find incredibly difficult to talk about and yet it's so important." His talk will use his own personal experience as a jumping-off point to provide an opportunity to start discussions on the topic within the community. With many community members commenting on the session proposal, the consensus is that this valuable talk will make a strong impact on the community and remind us that we are human!

David Rozas' talk presents findings from his PhD research on the Drupal Community and Commons-Based Peer Production. He will highlight empirical data on the types of contribution whose focus is directed towards the community, meaning contribution beyond source code. Once the facts are presented, he will explain why they are important and how we can use that information to build a stronger community that values both code and community contributions.

These talks will be presented as two 25-minute Community Keynotes. We look forward to bringing these topics to the Drupalistas of DrupalCon Barcelona and are honored to have our own community members contribute in such a fantastic way.

We look forward to seeing you at DrupalCon and participating in these keynotes along with you.

Jun 26 2015
Jun 26

Open Source's Impact on Government

Drupal has made a name for itself as the go-to solution for .gov website projects that need to be cost-effective, secure, and scalable across dozens or hundreds of agency sites.

.GOV Websites Declare Independence from Proprietary CMS Solutions

Relying on proprietary CMS solutions in the government space inevitably leads to disgruntled users who remark on clunky workflows, a heavy reliance on technical staff to make changes, and diminishing value as the platforms age and require more and more maintenance.

Many federal agencies have made great use of Drupal in the past decade, most notably whitehouse.gov, but the past several years have seen a number of state and local agencies adopt Drupal as a CMS as well. Lean and mean is the modus operandi for many state agencies as they've come to rely more on code and less on bureaucratic staff to do the heavy lifting of making government work.

Check out our infographic on some famous Drupal sites across the nation, and maybe you can enjoy the Independence Day holiday on a whole new level knowing that open source solutions are helping government better serve its citizens. Also, don't miss your chance to say hello to us when we join the Drupal4Gov community in Washington, DC this July!

Fill out the form to download the complete infographic!

Jun 26 2015
Jun 26

Drush "user-list" Command

Submitted by John Bickar on June 26, 2015 - 9:05 am

A simple task: list all the users on a site, optionally filtering by role or by status. Difficulty: using drush.

I had searched around a bit for this functionality and my Google-fu had failed me, so I decided to build off of work that already had been done and write a drush user-list command. (The internal monologue went something like this: "Is this a thing? It doesn't look like this is a thing. This should be a thing. Why is this not a thing? Let's make this a thing.")

Installation

Download user_list from GitHub or drupal.org and place it where you place the rest of your drush commands (e.g., in ~/.drush/ or /usr/share/drush/commands/).

Use

drush user-list

Returns a table of all users on the site.

drush user-list --status=active

Returns a table of all active users on the site.

drush user-list --roles="administrator,site owner"

Returns a table of all users with the role "administrator" or "site owner" on the site.

That's pretty much it.
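
If you are curious how such a command hangs together, here is a trimmed-down sketch of the general shape in a user_list.drush.inc file (not the actual module code; role filtering is omitted for brevity):

function user_list_drush_command() {
  $items['user-list'] = array(
    'description' => 'List the users on a site.',
    'options' => array(
      'status' => 'Filter by status (active or blocked).',
      'roles' => 'Filter by a comma-separated list of role names.',
    ),
    'bootstrap' => DRUSH_BOOTSTRAP_DRUPAL_FULL,
  );
  return $items;
}

// Drush resolves this callback name from the command file and command name.
function drush_user_list() {
  $query = db_select('users', 'u')
    ->fields('u', array('uid', 'name', 'mail', 'status'))
    ->condition('u.uid', 0, '>');
  if (($status = drush_get_option('status')) !== NULL) {
    $query->condition('u.status', $status == 'active' ? 1 : 0);
  }
  $rows = array(array('UID', 'Name', 'Email', 'Status'));
  foreach ($query->execute() as $account) {
    $rows[] = array($account->uid, $account->name, $account->mail, $account->status ? 'active' : 'blocked');
  }
  drush_print_table($rows, TRUE);
}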

Jun 26 2015
Jun 26

Ani Gupta, Drupal Mumbai community lead, StartupNext lead, formerly at Axelerant in India, and I got the chance to continue the conversation I began with Piyush Poddar at Drupal Camp London about the changing face of IT and open source in India. Under the heading "from consumption to contribution" we talk about India's move from being perceived as being good for cheap, outsourced code to being a place rich with brands and startups in their own right and the home to much open source contribution. We also talk about old versions of Drupal, the Drupal community and its mentoring culture, open source acceptance in business and government, and more!

Open source and Drupal in India

"India is infamous as a very cheap destination for software services and the sweat-shop-like factories that people have set up of 100s of coders hacking away on machines. To some extent that is still true. I personally--and a lot of my colleagues and peers--have felt that India actually has an amazing amount of talent. That is evident today. It's been a challenge for us to actually establish India as a destination to source really good, fantastic software development work. In the last 5 years, there have been a lot of new changes to the landscape. A lot of that is because of open source software, communities like Ruby and Drupal, and also the startup boom that is happening in India in the last three years."

Why are there so many software developers in India? "Because it's easy to get a certification in Microsoft .NET or Java or something and people got that and quickly got a job. That was a very big consideration for a lot of people. It was based around 'How can I be secure and get a good job?' Today, things are changing quite rapidly. Two things have happened simultaneously. One is that the open source community exploded. Drupal has exploded since 2011, when Dries came to India. Everybody who was providing Drupal services became aware of the larger community and why contributions were important. The discussions started happening. Businesses have started moving towards a contribution culture; they understand the ROI. But more importantly, developers themselves became aware that becoming part of an open community and sharing ideas actually makes them stronger and better."

"The startup culture has been very important in terms of showing that there is a better was to enjoy software and develop really cool applications, and make a lot of money as well in India. There is massive demand for really good talent. The startup scene in India has provided very profitable homegrown companies that acquired a massive amount of really good talent at really good salaries. The startups are definitely looking inward and unlike before, even the startups are solving Indian problems."

"I do believe that the culture is carrying over. A lot of professional services companies, especially those built around open source software, they understand that without contributing back to the community--be it Ruby, or Python, Wordpress, or Drupal--they understand that if they're not part of the community then they're actually losing out in a massive way. Branding becomes very important. People start talking about these companies that are contributing back. It's important. Productivity and motivation is not dependent on how much money you can throw at somebody. Motivation is going to come from getting that person happy. How does that person get happy? By actually building things and owning that and getting recognized for that. That's what open source communities provide."

"The last three to five years have been a complete [180-degree turn] as I see it. Today somebody who wants to set up a company doesn't think 'I'm going to get an Adobe license or a Microsoft license.' They're going to say, 'Okay, I'm going to get into Drupal or Ruby. I'm going to start building applications and I'm going to provide those services.' That's a completely different thing from what was happening before."

More on Drupal in India

Guest dossier

Interview video

[embedded content]

Jun 26 2015
Jun 26

One solicitation for a gig that I recently received was for a back-end developer. It contained the list of need-to-knows that I expected: PHP, Drupal module writing, OOP, MySQL, and so on. However, there was another list below that one, containing more technologies, with which the applicant should be “bullet-proof,” including JavaScript, jQuery, Angular, CSS, Sass, HTML5, PEAR, Apache, Linux shell-scripting, and several more. Despite the job heading, this was clearly a request for a full-stack developer.

 Stack of pancakes

Now, I actually can write JavaScript, jQuery, CSS, and HTML5, manage my own Linux server, and configure Apache domains, but I am far from being “bullet-proof” with these technologies. It would make little sense for someone to pay me by the hour to rip my hair out over CSS and JavaScript issues rather than using someone who is adept with them.

Is it because I’m not capable of being bullet-proof? Nope. Over the centuries I’ve written countless programs in 370 Assembler, 1130 Assembler, Mnemonic Assembler, APL, PL/1, Fortran, COBOL, Basic, SPL (the HP one), and on and on. Even Xerox 9700 form language. I was one of those people in the university comp-sci department that people hate; someone who never had to study. Don’t misunderstand. It wasn’t a matter of being better, necessarily, but that my thought process is the jelly to programming logic’s peanut butter. I have known many developers who could code circles around me, but I also have a wealth of experience combining the ability to internalize a client’s business issue with the ability to visualize and craft a solution. So, if I think that it’s not simply a matter of aptitude, then what is it?

I have a bit in common with Steve Jobs. Huh? Yeah, really. We both grew up about the same time, were exposed to early computing at a young age, had our first programming experiences via timeshare teletype terminals and early CPU’s with toggle switches, were completely seduced by how software improved and fulfilled our thought process, and we both had part of our foundation poured at Hewlett-Packard.

He was a visionary, predicting with certainty 20 years ago that the future of retail was going to be on the Web. He was able to do this, in part, because he noticed things more than other people, often at a visceral level. One thing Steve noticed was that the difference between middling and exceptional, for most things, was 1:2 – more often just 20%. Except, that is, for software, which stands virtually alone in technology with the difference between middling and exceptional being 1:50 or more, a vast difference.

He felt software was an art, and should be considered a liberal art. One thing this illustrates is that in software, as in other forms of art, there is a spectrum into which builders and artisans fall. The difference between the two is not caliber, but the direction in which they apply themselves, often a matter of taste, neither being better than the other.

I often use the analogy of a “pile” person versus a “file” person. I happen to be the former. I am more comfortable with stacks of paper around me than having the paper filed away, because I become paralyzed when trying to decide how to organize the files, and then I forget to look in the files. And just like piles are deep and files are wide (well, the lateral ones), so needs to be the focus of the software craftsman versus the full-stack developer.

This isn’t a value judgement nor a comparison of the worth of the work product, it’s simply an explanation of the space in which the person needs to spend their time, as contrasted with where they want their time spent. Neither breadth over depth nor vice-versa, but you have to select one or the other. Me, I enjoy understanding the big picture of the client’s need and crafting the solution. As with Steve Jobs, I prefer to focus on product rather than process, which makes my eyes glaze over (just ask anyone who watched me absorb git).

Early on, I chose depth over breadth: a writer rather than a linguist, a Harley mechanic rather than an engineer. With that decision comes the reality of having neither the time nor the desire to develop depth in each of the technologies in the full stack.

Breakfast time. I think I’ll head out for a short stack.

Image: "Pancakes" by hedvigs is licensed under CC BY 2.0

Jun 26 2015
Jun 26

This was our fifth critical issues discussion meeting in a row to be publicly recorded. (See all prior recordings.) Here is the recording of the meeting video and chat from today, in the hope that it helps more than just those who were at the meeting:

[embedded content]

Unfortunately, not everyone invited made it this time. If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The meeting log is as follows (all times are CEST real time at the meeting):


[11:04am] jibran: Issues https://www.drupal.org/project/issues/search/drupal?status[0]=1&status[1]=13&status[2]=8&status[3]=14&status[4]=4&priorities[0]=400&categories[0]=1&categories[1]=2&categories[2]=5&version[0]=8.x
[11:07am] dawehner: https://www.drupal.org/node/2509300
[11:07am] Druplicon: https://www.drupal.org/node/2509300 => Path alias UI allows node/1 and /node/1 as system path then fatals [#2509300] => 55 comments, 5 IRC mentions
[11:07am] dawehner: https://www.drupal.org/node/2408371
[11:07am] Druplicon: https://www.drupal.org/node/2408371 => Proxies of module interfaces don't work [#2408371] => 71 comments, 14 IRC mentions
[11:13am] plach: alexpott: dawehner GaborHojtsy: WimLeers: hamletic questions in the critical meeting
[11:13am] GaborHojtsy: plach: :P
[11:13am] plach: :)
[11:14am] WimLeers: "hamletic", wow :D
[11:14am] plach: https://www.drupal.org/node/2478459
[11:14am] Druplicon: https://www.drupal.org/node/2478459 => FieldItemInterface methods are only invoked for SQL storage and are inconsistent with hooks [#2478459] => 105 comments, 26 IRC mentions
[11:14am] plach: https://www.drupal.org/node/2453153
[11:14am] Druplicon: https://www.drupal.org/node/2453153 => Node revisions cannot be reverted per translation [#2453153] => 134 comments, 42 IRC mentions
[11:14am] dawehner: alexpott: i mean our request context ->getCompleteBaseUrl is basically that
[11:15am] alexpott: dawehner: yep
[11:16am] dawehner: GH sadly does not allow you to filter by 3.0.issues
[11:16am] WimLeers: that's weird
[11:16am] jibran: https://www.drupal.org/node/2500523
[11:16am] Druplicon: https://www.drupal.org/node/2500523 => Rewrite views_ui_add_ajax_trigger() to not rely on /system/ajax. [#2500523] => 27 comments, 6 IRC mentions
[11:16am] dawehner: alexpott: https://github.com/symfony/symfony/issues/6406#issuecomment-58411133
[11:19am] catch: https://www.drupal.org/node/2470679
[11:19am] Druplicon: https://www.drupal.org/node/2470679 => [meta] Identify necessary performance optimizations for common profiling scenarios [#2470679] => 62 comments, 15 IRC mentions
[11:19am] catch: https://www.drupal.org/node/2497185
[11:19am] Druplicon: https://www.drupal.org/node/2497185 => Create standardized core profiling scenarios and start tracking metrics for them [#2497185] => 36 comments, 11 IRC mentions
[11:19am] GaborHojtsy: lol, my chrome died, is the meeting still running? :)
[11:19am] alexpott: GaborHojtsy: yes
[11:19am] WimLeers: https://docs.google.com/spreadsheets/d/1iTFR2TVP-9961RUQ4of-N7jZLTOtfwf3...
[11:19am] dawehner: GaborHojtsy: yes
[11:20am] GaborHojtsy: yay I got back the controls when it reopened
[11:20am] GaborHojtsy: huh
[11:20am] GaborHojtsy: will still be able to stop broadcast, etc.
[11:20am] WimLeers: nice!
[11:20am] Druplicon: darn tooting it sure is!
[11:21am] WimLeers: https://www.drupal.org/node/2429287
[11:21am] Druplicon: https://www.drupal.org/node/2429287 => [meta] Finalize the cache contexts API & DX/usage, enable a leap forward in performance [#2429287] => 105 comments, 8 IRC mentions
[11:22am] WimLeers: https://www.drupal.org/node/2450993
[11:22am] Druplicon: https://www.drupal.org/node/2450993 => Rendered Cache Metadata created during the main controller request gets lost [#2450993] => 104 comments, 20 IRC mentions
[11:22am] WimLeers: https://www.drupal.org/node/2351015
[11:22am] Druplicon: https://www.drupal.org/node/2351015 => Link CSRF tokens can be hijacked when cached with insufficient contexts [#2351015] => 98 comments, 35 IRC mentions
[11:24am] WimLeers: https://www.drupal.org/node/2429287
[11:24am] Druplicon: https://www.drupal.org/node/2429287 => [meta] Finalize the cache contexts API & DX/usage, enable a leap forward in performance [#2429287] => 105 comments, 9 IRC mentions
[11:25am] WimLeers: https://www.drupal.org/node/2487600
[11:25am] Druplicon: https://www.drupal.org/node/2487600 => #access should support access result objects or better has to always use it [#2487600] => 17 comments, 1 IRC mention
[11:27am] WimLeers: https://www.drupal.org/node/2493033
[11:27am] Druplicon: https://www.drupal.org/node/2493033 => Make 'user.permissions' a required cache context [#2493033] => 18 comments, 4 IRC mentions
[11:29am] WimLeers: https://www.drupal.org/node/2473873
[11:29am] Druplicon: https://www.drupal.org/node/2473873 => Add cacheablity support for entity operations [#2473873] => 26 comments, 3 IRC mentions
[11:34am] GaborHojtsy: https://www.drupal.org/node/2512460
[11:34am] Druplicon: https://www.drupal.org/node/2512460 => "Translate user edited configuration" permission needs to be marked as sensitive [#2512460] => 2 comments, 1 IRC mention
[11:36am] GaborHojtsy: https://localize.drupal.org/node/63903
[11:36am] Druplicon: https://localize.drupal.org/node/63903 => Test the staging version of localize.drupal.org on Drupal 7 NOW! => 0 comments, 4 IRC mentions
[11:36am] dawehner: alexpott: would you be okay with adding \Drupal\Core\Http to add things like TrustedRedirectResponse there?
[11:46am] plach: https://www.drupal.org/node/2507899#comment-10059322
[11:46am] Druplicon: https://www.drupal.org/node/2507899 => [policy, no patch] Require hook_update_N() for Drupal 8 core patches beginning June 29 [#2507899] => 34 comments, 5 IRC mentions
[11:48am] dawehner: https://www.drupal.org/node/2509898
[11:48am] Druplicon: https://www.drupal.org/node/2509898 => Additional uncaught exception thrown while handling exception after service changes [#2509898] => 3 comments, 1 IRC mention
[11:49am] • jibran hates this cycle of exception rendering.
[11:50am] WimLeers: https://www.drupal.org/node/2450993
[11:50am] Druplicon: https://www.drupal.org/node/2450993 => Rendered Cache Metadata created during the main controller request gets lost [#2450993] => 104 comments, 21 IRC mentions
[11:54am] dawehner: https://www.drupal.org/node/2489024
[11:54am] Druplicon: https://www.drupal.org/node/2489024 => Arbitrary code execution via 'trans' extension for dynamic twig templates (when debug output is on) [#2489024] => 24 comments, 9 IRC mentions
[12:03pm] WimLeers: https://www.drupal.org/project/issues/search/drupal?project_issue_follow...
[12:04pm] plach: WimLeers++
[12:04pm] plach: (oh boy :D)
[12:04pm] WimLeers: :P
[12:04pm] jibran: slow clap for WimLeers

Jun 26 2015
Jun 26

Hello everyone! Drupal module development offers us a lot of opportunities, and today I would like to tell you about a standard, out-of-the-box Drupal 7 module. I have deployed many Drupal 7 websites, but never paid attention to the Book module. What are its features? What does it do?

As the name of the Drupal 7 Book module suggests, it enables us to create collections of materials (books): various kinds of instructions, frequently asked questions (FAQ), module documentation, curated user materials, etc. By default, the Book module is disabled; once you have enabled it, you can configure it at admin > content > book > settings.

Use of Drupal Book module

  • Managing books. You can separately assign permissions for creating, editing and deleting pages, adding documents to books, and creating new books. Users with the permission to manage book content can add various types of documents to books; they can also view the list of all books and edit and reorder pages.
  • Book menu. Apart from the additional navigation elements that appear on pages belonging to a book, the module generates a block of book links that can be configured in the block management section.
  • Shared work. Books can be edited by different users: those with the appropriate rights can create new pages in a book and include any page of the website in a book's content.
  • Printing. Users with the permission to view the printer-friendly version will see links to it on book pages.

Book in comparison with taxonomy (classification)

A classification is created by the website administrators, and article authors select the desired category from it. But authors often make the wrong choice, or the classification does not include rarely used options.

A “book” of existing articles is a way for administrators to combine materials on one subject created by different authors. Both taxonomy and books classify material, but while in taxonomy authors select the category themselves, a book is assembled by the administrators according to their needs.

Also, in taxonomy a separate stream of information is created for each category term. Articles appear there from time to time, and their quality varies from good to absolutely amateurish; an article may even be completely irrelevant. A book, by contrast, collects only the best articles.

Therefore, a book is always more compact than taxonomy, and its quality is always higher. We can say that taxonomy divides the information flow into dynamic streams on different subjects, while a book carefully selects the best and most relevant of all materials, collecting them in a format that is easy to browse.

A book is created through the shared work of many authors. The Book module implements this idea well, allowing a book to include pages created by many authors; I would call them public pages. While ordinary articles and comments have strict authorship, and only editors and administrators can edit the text, a public page can be edited by any user who has the right to do so; even guests can be included. The last author to make changes is displayed as a participant of the public page.

Version history is kept: users with the appropriate permissions can use the revision system to switch between versions. For more convenient comparison of versions, you can use the Diff module.

Examples of Drupal Book module use

  • The documentation on drupal.org is written using this module: https://www.drupal.org/documentation/modules/
  • On the pages of a book you can see the following links: "previous page name", "up", "next page name" (this is characteristic of the Book module)
  • A striking example is a site built entirely on this module, which many of you have used without knowing it: http://githowto.com/

Recommendations on Drupal Book module use

  • From the first days of your website's life, you can take a cue from other people's books and create at least an FAQ. Books make sense even on sites with a couple of dozen articles.
  • Once you have sharpened your skills in creating books and built a community on your website, proceed to public pages. To start, make a book with several pages of instructions that reflect the website's theme, and explain to visitors who have permission to edit pages that they are expected to help with editing.
  • Only when the community on the website becomes very big and the capabilities of books are no longer enough should you start thinking about (but not yet installing) wiki modules. It's not worth considering a wiki before your traffic reaches 5-10 thousand visitors a day; the Book module's functionality completely covers the needs of a community below that size.

For example, Drupal.org has over 22.4 million unique visitors a month, and things like the FAQ and module documentation are implemented through books. Yet wiki modules are not installed on Drupal.org, although the Drupal development community would certainly be capable of filling a wiki with materials and supporting it.

The thing is that everyone knows the wiki concept, not only Drupal developers, whereas only some Drupalers know the book concept.

The modules that extend the standard features of Drupal Book module

  • Outline Designer makes book management more intuitive.
  • Hidden Nodes adds a publish/unpublish-style system for pages, with per-role distinctions.
  • Book Copy allows users to copy full books or subtrees of books.
  • Book Title Override allows you to change the title of a page as displayed in the menu.
  • Book Library allows you to classify books by category, which provides easier management.
  • Booktree displays the hierarchy of a single book as a tree.
  • Diff shows the difference between versions of pages in a book.

Description of Drupal Book module work

Settings for the types of content:

Drupal 7 Book module

The list of your books will look like this:

Drupal 7 Book module

List of book pages:

Drupal 7 Book module

Note: if you change the page titles, they will change not only in the menu list, but also on the pages themselves.

Main page of the book:

Drupal 7 Book module

Drupal 7 Book module

To navigate through the books, you can use a standard block provided by the module.

In order to start using it, you just need to enable it on the block management page.

(admin > structure > block)

Here is an example of an ordinary page of the book. The main difference from the book's main page is that there is no block displaying the hierarchy of all the pages (between the page content and the links).

Drupal 7 Book module

When you create a book page, at the bottom of the form you need to choose a book and a parent item.

Drupal 7 Book module

Access rights:

For easier editing of the book, you can use the Outline Designer module. The result of the module's work:

Drupal 7 Book module

The modal window for adding pages:

Drupal 7 Book module

Module settings: (admin > content > book > settings)

Drupal 7 Book module

(admin > content > book > outline_designer)

Drupal 7 Book module

To hide the display of some pages, you can use the Hidden Nodes module.

Module settings:

Drupal 7 Book module

Access rights:

If you need to create a copy of a book with all its pages, you can use the Book Copy module. The result of the module's work:

Drupal 7 Book module

After following the "copy book" link, you can give a name to the book copy. All the pages of the book will be copied automatically.

If you need to change the name of a page as displayed in the book contents, you can use the Book Title Override module.

Example of the module use:

Drupal 7 Book module

To categorize books, you can use the Book Library module. The result of the module's work:

Drupal 7 Book module

Drupal 7 Book module

To get rid of the notice, you need to install the dev version (proof). Note: after adding this module, Book Copy stopped working. For better management of revisions, you can use the Diff module. The result without the Diff module:

Drupal 7 Book module

Result with the Diff module:

Drupal 7 Book module

Drupal 7 Book module

Useful links

Conclusion

You can use the Drupal Book module from the first days of your website's life. It can be used for creating various types of instructions, building an FAQ section and adding content to it, organizing co-authorship of pages, and more. Of course, many of you will say, "Why use this module if there are wiki-type modules?"

In my opinion, there's no point in using wiki-type modules if you can use the Drupal Book module, which uses the web server's resources more rationally.

Jun 26 2015
Jun 26

If you are interested in the Drupal community and you are coming to DrupalCon, we are looking for your opinions!

Take the Community Content at DrupalCon Survey.

For a very long time, community conversations at DrupalCon took place at sessions in the community track, which ran alongside all the other content at DrupalCons. The community track allowed for presentations on topics related to our community. While it was good to be able to raise the topics, there were real concerns that the session format meant that nothing productive came from the conversation. Further, the community track was just much less attended than other sessions, with 25 or so folks in a room that holds 200.

At DrupalCon Prague in 2013, we launched the first Community Summit. Held on the Monday of DrupalCon week, it is a day-long event designed in an un-conference style to bring community members together to tackle the issues that help our community achieve more together. Morten DK, Addison Berry, and others (bless you all - you have been great collaborators) ran the program and led a number of very useful conversations.

For the last few DrupalCons we have run these community summits, and have heard a whole new round of feedback, including:

  • Work at the Summit does not tend to continue after the Summit, so we lose momentum.
  • We don’t always have the right people in the room to really solve some of these problems.

We also did some surveying of our community leaders at the beginning of last year and heard that they are very hungry for skills training that can help them take their camps, meetups, trainings, etc. to the next level. They want to learn about how to manage the finances of a camp, how to recruit sponsors, and how to be better public speakers. The current community summit format does not really allow for this kind of skills training either.

So - we are looking for your feedback about how we might restructure the community content at DrupalCons.

Take the Community Content at DrupalCon Survey.

Just so you know, I hit the woods with my family and no internet and no phones for one week a year, and that week is next week. I won’t be able to respond to comments or the survey while I’m gone, but I will do so when I return. If you have feelings, ideas, or feelings about ideas - stick them in the survey! I’ll share the results back out. If you have book recommendations for my trip, hit me up on Twitter.

-----------------------

Holly Ross
Executive Director
Drupal Association
