Dec 01 2015

This week a lot got undone, broken, recovered and then some.

Worked on the product backlog; it's not quite ready for public consumption yet, but getting there. Sprint backlog for the week:

  • Shop for VPS
  • Set up VPS
  • Migrate to VPS
  • Fonts – via CSS
  • Sort out Contact Form (emails not working)
  • Sort out Domain name and DNS stuff (may need an expert’s assistance)
  • Backlog grooming – WIP

Acquia Cloud Professional would be nice: it would make life much easier, and the support would be kick-ass (and needed), but it's out of my budget! Time to count the pennies and find a candy store that fits the budget. Or… went with DigitalOcean: it gives you an SSD and quite a bit of computing power on a budget. It has no developer tools though, so I'll need to get devops help and learn some devops stuff myself (kind’a and kind’a not looking forward to that), but hey, you get what you can afford!

  • Added an SSH key, instructions easy enough to follow
  • Am in as root! (nice!)
  • It's an Ubuntu VPS, LAMP stack, phpMyAdmin installed
  • Explored setting up DNS and nada – haven't got time for this; my sprint capacity is significantly reduced this week and possibly the next too! Can't wait; time to call in devops help. Asim enlisted to help set up DNS for the VPS and for my social transformation site (thank you Asif)
  • With not much to do, dived into CSS architecture (for Drupal 8)… 10 mins later… need to find an idiot’s guide to CSS in Drupal!
  • Had good wins today, the fear of the terminal is dissipating.
  • Need to migrate my site from Acquia Cloud to the new VPS environment.
  • Installed Backup and Migrate, activated it, and disaster strikes! Backup and Migrate broke the site and I cannot access the Extend page to uninstall it.



  • Looked up how to uninstall backup_migrate using Drush, since I could not access the Extend page – nada!
  • But if I go to an invalid URL the site seems to work, yet I can't access anything in the admin menu. Insanity!



  • Tried disabling it using Drush (drush dis -y backup_migrate && drush pm-uninstall -y backup_migrate); did not work. Tried a bunch of stuff, whatever Google threw up as candidate solutions.
  • Decided to take the simplest option and restored the site from backup on Acquia insight, easy enough.
  • I’ll take the small win and call it a day!
  • Started day 4 with a nice surprise, my first contribution! wooHoo! The joy of little things!
  • It was a tough start; I forgot my admin password again (blistering barnacles!) and remained locked out for a good part of the timebox! Tried a number of suggested means to recover the admin password using Drush, and it was one fail after another! Eventually reached out to @Dakku for help, and it turns out it's a pretty simple process!
  • Attempted migration from the DB backup – something migrated, but not quite; need to figure out what went wrong. The theme didn't quite work even though it's Bartik straight out of the box. Am beginning to have doubts about maintaining a VPS by myself.

Open Social Broken 001

  • Am back in but am out of time, more on day 5.
  • Tip (courtesy of @Dakku): resetting the admin password with Drush is simple:
  • In the terminal, type: cd /var/www/html/
  • Once in the directory, type: /usr/local/bin/drush8 uli
  • You will get a return value that looks like: /user/reset/1/1448057351/JY2957SilWctPfNfN1gUQ2bT5lS-NvCwjt3heDqdu5A
  • Copy everything from “/user/…” onwards and paste it after your domain in the browser’s address bar.
  • Go to that URL; this is a one-off password reset process where you can set a new admin password.

Decision time! I can spend time building my site in D8 with dev tools to support me (on Acquia Cloud), or I can build without them and pick up the devops skills needed to manage my VPS. Time being the deciding factor, I am ditching the VPS route and will continue with Acquia Cloud. As for affordability, it turns out that as an Acquian I get an environment as an employee benefit! wooHoo! Though this week was not as productive as hoped, I got a couple of nice wins and picked up some more Drush (the fear of the terminal is dissipating! BTW is a pretty epic resource).
Retro time

  • Shop for VPS
  • Set up VPS
  • Migrate to VPS (theme isn’t working)
  • Fonts – via CSS
  • Sort out Contact Form (so that it sends out emails)
  • Sort out Domain name and DNS stuff (may need a subject matter expert to assist)
  • Backlog grooming – WIP

Having decided to stay on Acquia Cloud I can focus on the site backlog in week 5, (mental note: need to pick up the MVP backlog items soonish).

Dec 01 2015

Search API is a powerful beast for Drupal 7. With its integration with Views, Facet API and other search-related modules, you can build pretty much any kind of search page (and if something's not already done in contrib, Search API and Facet API offer clean ways to extend them). Having said that, it's also true that it is a big and complex module (mostly because it's normally used alongside other modules that add plenty of features) that takes some time to learn before you can take advantage of all its potential.

The problem

We've recently had a project where we were asked to allow users to sort search results by different criteria, one of them being alphabetical order. This feature, quite simple in principle, requires a few extra configuration steps and can easily catch out people with little experience using Search API. The reason is simple: using the Search API Sorts module, it is quite easy to allow A-Z sorting on the title. You just index the title as the "String" type and enable alphabetical sorting in the "Sorts" tab.

The problem with that approach is that you can't really search content by title, because to do so you need to index it as "Fulltext", and if you index it as "Fulltext", the A-Z sorting won't work. At this point, many would already be thinking about writing custom code to solve the problem. However, Search API has a very powerful feature that makes this task easy to complete without having to write a line of code.
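
To see why the two data types conflict, here's a toy sketch in plain Python (this is purely conceptual, not Search API code; all names are illustrative). A "string" copy of the title keeps the raw value, so it sorts A-Z, while a "fulltext" copy is tokenized so keywords can match it:

```python
# Toy model of indexing the same field two ways (not Search API code).
def build_index(nodes):
    index = []
    for node in nodes:
        index.append({
            'title_string': node['title'],                    # sortable as-is
            'title_fulltext': node['title'].lower().split(),  # searchable tokens
        })
    return index

nodes = [{'title': 'Zebra crossing'}, {'title': 'Apple pie'}]
index = build_index(nodes)

# A-Z sorting works on the raw string copy...
alphabetical = [d['title_string'] for d in sorted(index, key=lambda d: d['title_string'])]
# ...while keyword search matches against the tokenized copy.
hits = [d['title_string'] for d in index if 'pie' in d['title_fulltext']]
print(alphabetical)  # ['Apple pie', 'Zebra crossing']
print(hits)          # ['Apple pie']
```

Neither copy can do the other's job: you can't meaningfully sort a bag of tokens, and you can't keyword-match an opaque string. That's exactly the gap aggregated fields fill.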

Aggregated fields to the rescue

This one was quite new to me until recently. When configuring a Search API search index, one of the things you can control is which fields are indexed (and how). With aggregated fields, you can combine some of the fields that are going to be indexed into a new field (which doesn't need to exist elsewhere on your site). You can do this from the "Filters" tab of the index.

search api filters

That way, you can index each field as desired, but also create combinations of different fields and store them as different data types, so that you can use them in sort options, Facet plugins, etc. The next step is to scroll down a little bit to the "Aggregated fields" vertical tab, under "Callback settings". From there, you can add as many aggregated fields as you want. See below the configuration used to allow A-Z sorting.

search api aggregated fields

As you can see, I simply select the node's "Title" and store it as "Fulltext". After saving an aggregated field, you can go back to the "Fields" tab and check that it appears there. As with any other field, you can choose whether to index it, change its data type, and assign a "Boost" value. That's fairly important in this particular scenario, as we'd probably want to give the "Title" field a higher boost than other fields, so that content with a matching title ranks higher in the search results. The next picture shows a fragment of the "Fields" tab for a Search API index after adding the aggregated field from the previous step.

search api fields

Not so difficult, right? Now we can index the title twice: once to allow A-Z sorting, and again to deliver a search page that works as any user would expect. Go and have a play with aggregated fields; you'll probably find they're very useful and can help in plenty of cases!

Dec 01 2015

You set up a Drupal site for a client and after launch, they complain because important things are not happening. Search is not being updated. Backups aren’t being run. Boost is not being cleared. Logs are not being pruned. All these things should run automatically when cron is run (Drupal comes with the ability to run cron automatically) at specified intervals, such as every 3 hours. So when does cron actually run, if not at the specified interval?

Cron set to run every 3 hours

Setting cron to run every 3 hours does not necessarily mean that cron will actually run every 3 hours, because it is not truly automatic: something needs to trigger cron to run. That trigger is a user visiting the site. As part of handling the page request, Drupal checks the last time cron ran and, if it is due, runs it as part of that request. If cron is set to run every three hours and three hours have passed since the last run, it will run as part of the page request of whichever user happens to hit the site at that time.

But what happens if you have an extremely low traffic Drupal site, application or intranet? If no one visits the site at all, cron will not run. Or if a user visits the site and cron is run, and another user visits the site 2 hours and 59 minutes later, cron will not run because three hours hasn’t passed yet. If a third visitor visits 3 hours later, then cron will run. But that is almost 6 hours since the previous cron run.
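
The trigger logic above is easy to model. Here's a small Python simulation (an illustrative model, not Drupal code) showing how the scenario in the previous paragraph leads to an almost six-hour gap:

```python
# Simulating Drupal's "automatic" cron: it only fires during a page
# request, and only if the configured interval has elapsed since the
# last run.
THREE_HOURS = 3 * 60 * 60  # "run cron every 3 hours", in seconds

def cron_run_times(visit_times, interval=THREE_HOURS):
    """Given page-visit timestamps (seconds), return when cron actually fires."""
    runs = []
    last_run = None
    for t in visit_times:
        if last_run is None or t - last_run >= interval:
            runs.append(t)
            last_run = t
    return runs

# The scenario above: a visit at launch, one 2h59m later, one 3h after that.
visits = [0, 2 * 3600 + 59 * 60, 5 * 3600 + 59 * 60]
print(cron_run_times(visits))  # [0, 21540] -- second run is almost 6 hours after the first
```

The middle visitor arrives one minute too early, so cron skips that request entirely and the real gap between runs is nearly double the configured interval.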

Cron runs

What about sites with enough traffic?

So on low traffic sites, relying on cron to run automatically is problematic because the times that cron runs will be erratic. What about sites that get more traffic? They should get enough traffic so that cron will run more or less on schedule. But they will face another problem - the time it takes cron to run. Because cron is being run as part of a page request for a real user, it can slow down the page for that user.

Solution: Don’t rely on automatic cron runs

The solution is not to rely on the automatic cron runs. Instead, you can set it up on the server as a cron job. This is not reliant on page requests, so will always run on time. And it will run in the background, so will not slow down page requests for users.

Setting up a cron job using cPanel

If you are using a system like cPanel or Plesk, setting up a cron job is relatively straightforward. Head to the cPanel account for your site and look for Cron jobs under Advanced.

Cron jobs in cPanel

You will be presented with a page where you can add a new cron job. cPanel comes with some preconfigured settings that you can use. For example, you can select Once per day, and the form will get populated automatically.

Running cron every day

You then need to add the Command. This is the URL that will be requested to trigger cron to run. If you look in the root of your Drupal installation, you will see cron.php. The URL hits this file with the addition of a secret key. The secret key is a necessary security precaution to prevent other people from hitting cron.php on your site.

To get the cron URL with your secret key, go to Administration > Configuration > System > Cron. You will see the full URL next to “To run cron from outside the site, go to”

Cron from outside the site

In the Command field in the New Cron job field, add curl --silent followed by the cron URL.

Setting up cron in Linux

If you don't have cPanel, then you can set up cron jobs on the Linux command line. Check out this great resource for more information: Schedule Tasks on Linux Using Crontab
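
Putting the pieces together, a crontab entry that requests the cron URL every 3 hours might look like the following sketch. The domain and key here are placeholders; copy the exact URL shown on your site's cron settings page:

```shell
# Run Drupal cron every 3 hours, on the hour (edit with: crontab -e).
# example.com and YOURSECRETKEY are placeholders, not real values.
0 */3 * * * curl --silent http://example.com/cron.php?cron_key=YOURSECRETKEY
```

The five fields are minute, hour, day of month, month, and day of week; `*/3` in the hour field means "every third hour".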

Disable Drupal’s automatic cron

Now that you have cron running at the server level, you can stop Drupal from running it itself. Simply change the "Run cron every" field to "Never".

Cron set to never run

Check that cron is actually running

Wait until cron should run next. Then go back to the cron config page. If it has run successfully, you will see that the Last run has been updated.

Last time cron has run

You can also check the log messages, which will confirm that cron has completed.

Log message showing cron has run

And that is it! Cron will now run in the background according to the specified timeframes regardless of how much traffic your site gets. And as your site grows and gets more traffic, you don’t need to worry about cron slowing down page requests.

Dec 01 2015

The Picture module helps you to use Drupal's image creation capabilities to set up responsive images using the picture element. It's a backport of the Responsive Image module in Drupal 8 core.

A friend of mine creates an online photo advent calendar every year - I previously wrote about how I built the site, and ahead of this year's edition, I wanted to improve its performance. It's an image-based site, so responsive images were the obvious target. The other item on the hit list was inline critical CSS, which is pretty easy, thanks to this tutorial from Chris Ruppel of Four Kitchens.

With the help of a tutorial from ThinkShout I was able to get up and running with setting up breakpoints and responsive image styles.

However, for some custom theming requirements, I needed to do some fiddling with the markup around the images, so I couldn't just render the items as normal.

I was fairly familiar with using theme_image_style in preprocess_node to do this kind of thing with images, but couldn't find any documentation on achieving the same thing with the picture module.

Digging into the code, I could see that there is a theme hook called picture, which was clearly what I wanted. Like theme_image, it takes an array of variables, with the file URI, width, height, alt, attributes, and, crucially, an array of breakpoints. Trouble was, I couldn't find any documentation on the expected format of those breakpoints, or where I could get the values. A quick look at the code of the breakpoints module yielded some likely-looking functions, but nothing that gave me what I needed.

With a bit of poking around in the output of dpm (from the devel module) I was able to see that the render array for the field included a #breakpoints array, so I was able to pull that out and pass it in to theme_picture.

foreach ($vars['content']['field_images'] as $key => $value) {
  if (is_numeric($key)) {
    if ($value['#item']['type'] == 'image') {
      // Extract and transform the image title.
      $text = mytheme_text_transform($value['#item']['title']);
      $value['#item']['title'] = '';
      $picture_vars = array(
        'uri' => $value['#item']['uri'],
        'breakpoints' => $value['#breakpoints'],
      );
      $picture = theme('picture', $picture_vars);
      $images[] = $picture . $text;
    }
  }
}
$vars['images'] = theme('item_list', array(
  'items' => $images,
  'attributes' => array(
    'class' => 'day-images',
  ),
));

By way of comparison, here's the old version of the code which used theme_image_style:

foreach ($vars['content']['field_images'] as $key => $value) {
  if (is_numeric($key)) {
    if ($value['#item']['type'] == 'image') {
      $image_vars = $value['#item'];
      $image_vars['path'] = $image_vars['uri'];
      $image_vars['style_name'] = 'large';
      $image = theme('image_style', $image_vars);
      // Extract and transform the image title.
      $text = mytheme_text_transform($image_vars['title']);
      $image_vars['title'] = '';
      $images[] = $image . $text;
    }
  }
}
$vars['images'] = theme('item_list', array(
  'items' => $images,
  'attributes' => array(
    'class' => 'day-images',
  ),
));

Nov 30 2015

Entity Pilot 8.x-1.x-beta4 adds a new feature - the ability to easily sync a whole content-entity bundle between your Drupal 8 sites - e.g. all terms in a vocabulary; or all content for a given content-type.

Watch the short demo to see it in action.

This feature is ideal for custom install profiles where sites need the ability to easily import standard vocabularies. It allows you to create them once and store them in your private Entity-Pilot content repository, and then import them on new sites as desired.

Want to know what's coming next for Entity Pilot? See the roadmap.

Nov 30 2015

Last November, launched on Drupal and became one of the highest trafficked websites in the world to launch on an open-source content management system (CMS). Today, we're proud to announce that we have been recognized with a 2015 Partner Site of the Year Award from Acquia for our work on The award recognizes outstanding visual design, functionality, integration and overall customer experience for a media website. 

 This year’s winners exemplify what can be done with great technology, beautiful design, and a customer-first mindset...Ultimately, it came down to a hair-splitting exercise to determine which sites serve their audience’s needs in the most effective, seamless way possible.

-Scott Liewehr, CEO and Co-Founder of Digital Clarity Group

The Weather Channel's Journey to Drupal

The Weather Channel (TWC) teamed up with Mediacurrent in 2012 to begin the process of a full migration from their previous content management system to Drupal. 

By taking the time to understand the requirements and the impact of every decision, Mediacurrent planned a resilient architecture, allowing both parties to quickly respond to changing business needs without major disruptions. Our goals were to help TWC adopt an open-source solution and ensure that the new website also had drastically improved page load times and reduced infrastructure requirements. 

Mediacurrent's team rose to the challenge, creating 30 completely custom modules, 120+ custom features and custom code on the final web platform. In a recent blog series, Senior Drupal Developer Matt Davis detailed Mediacurrent's approach to architecting a custom "presentation framework" for TWC and shared how we increased content portability, improved page load times, and created flexible widgets across multiple devices. 

The Results

now serves millions of pages to more than 100 million people each month. Weather’s move to Drupal shows how complex, content-rich sites benefit from an open, agile platform to deliver an amazing experience for every site visitor.

-Dries Buytaert, Acquia CTO and Drupal project creator

  • Increased feature velocity and scalability to support 100M unique visitors monthly
  • Streamlined editorial process by 175%
  • Improved cache efficiency from 50% pre-launch to 96.3% post-launch, with an increase to 99.6%  3 weeks post-launch

For a complete case study, you can read more about Mediacurrent's work on here or at  

Q&A With | Video
Migrating the Weather Channel to Drupal | Webinar
A Novel Presentation Framework | Drupalcon 2015 Presentation Recording

Nov 30 2015

Mike Booth, Senior Cloud Software Engineer at Acquia, on tackling concrete problems, file systems, real world Drupal -- and the value of incremental improvement. Part 1 in a series.

Mike Booth started out in electrical engineering, earning a PhD at Cornell, where he trained in laser physics and semiconductor manufacturing. But for most of his career he’s worked in Web programming.

Booth says he’s tinkered with “every layer of the LAMP stack: AWS provisioning and configuration, Ubuntu package management, Apache, Nginx, Varnish, PHP, Ruby/Passenger, Git and SVN servers, MySQL replication, GlusterFS-based distributed file systems.” Mike was also a senior member of the team that designed, developed, and launched Acquia Cloud, the AWS-based platform-as-a-service for hosting Drupal-based websites.

Today, Booth’s background in experimental physics continues to influence his approach.

“Experimental physics is a wonderful subject,” he wrote. “You learn to approach theorists with the proper balance of respect and suspicion. You learn about double-stick tape, and when to apply it to a fifty-thousand-dollar laser system. You learn that no procedure or apparatus is too simple to fail. You learn that to make working things, you must practice the art of repairing broken things.”

Below, check out Booth’s thoughts on file systems and other topics. The interview was conducted and redacted by DC Denison, who interjects the occasional question or comment under the code name “Q.”

File Systems and Real World Drupal

Drupal is a very pragmatic platform, traditionally. It’s not stuck on academic considerations. Everything that’s there is there because someone put it there, because they needed it in production, or they needed it to do their job. There’s an active dialogue about publishing websites, as opposed to esoteric aspects of software design.

As a file system guy, I’m a consumer of the product. I’m not re-engineering filesystems at a deep level, but I have to try and look at them from our viewpoint and our customer’s viewpoint and find solutions to the real problems.

It’s like, “Well, in an imaginary world, I could snap my fingers and everybody could switch to Amazon S3 (Simple Storage Service).” But no. The project has momentum and the site is live now. We have earlier versions of the thing and they’ve all been designed around earlier assumptions. You have to resist that certain philosophical perk that you get way too much of in engineering: the start-up thing that says, “I want to revolutionize file systems.”

The Challenge of Working at Acquia

The most exciting thing about Acquia to me is that we operate at such scale with such important customers. We deal with real problems that are right in front of us. They are not theoretical. Important things are happening. We are working with real live stuff in production.

Engineers often say, “I’m fond of greenfield things.” The tendency is to say, “I want to work on something that has no customers, where there’s a completely green field and I can do whatever I want.”

Q. It’s like wanting to build a new house rather than renovating an existing one.

Yes, but there’s value in working with people where they are. It means that you end up in a world of compromises and you have to negotiate your way through complicated design problems. They’re less comfortable in the world than they are in your head. At the same time, I really value being able to engage real problems, to touch real things.

I can imagine a lot of elegant ways to design a file system. It’s an academic problem for computer scientists, and it can be super fun.

But in the end, there’s also something beautiful about, “Well, I have this concrete problem and I have to solve the concrete problem for concrete customers who are right in front of me.” You have to use your design tools to target real world websites that are happening right now.

Q. Can you give me an example?

Well, the problem of abstracting a file system. I want to be able to plug one or another of various different file system candidates into our Web hosting platform and be able to use them interchangeably. How well can I hide the background details from the higher levels of the architecture? It’s not an abstract problem. It’s not a problem that I can address in an ivory tower where no one is using it.

The Value of Incremental Improvement

One of the things that’s nice about software is that if you’re not trying to dominate the headlines, but actually work on it, there’s just a ton of stuff to do. You can really help people out. If we get some new feature working, it will really help developers out with their workflows. If we can improve the stability of something by one percent, that doesn’t sound like an exciting thing, but it’s real engineering.

I studied actual engineering in school, the kind where you build semiconductors. I built semiconductor lasers. And if the laser doesn’t work, you don’t get to graduate.

You end up obsessing about things like cleaning. It’s like, “How do I wash this?” and “Did I touch it with the wrong chemical at the wrong time?” It’s all very controlled, and it’s a part of a huge system.

Somewhere, billions of people are carrying phone parts that benefitted from my earlier work. The same sort of thing applies with the work I do at Acquia. It’s like, “Oh, I’ll make this change in the file system and thousands of websites run by companies that employ millions of people are going to enjoy better efficiency. They are going to load better.” The statistics on how Web performance can improve your business are very compelling.

Most of the world works that way. Real improvements happen in tiny pieces, but the pieces pile up. Anything you can improve, even if it seems like a little thing in the corner, has a beautiful downstream effect that is sort of inspiring.

Coming next, in Part 2: the backstory on Drupal's file system, the advantages of Gluster, object storage systems, and "the big gorilla in the room" (Amazon S3).

Nov 30 2015

We are offering five community trainings on Friday, January 29 at 701 Carnegie Center. Registration will open at 9 am with classes starting around 9:30 and running until the end of the day. Breakfast and lunch will be provided for the low cost of your ticket.

For full training descriptions, view the training page.

Introduction to Drupal Best Practices and Development Workflows
Trainer: Mike Anello, DrupalEasy

Becoming an Awesome Drupal Project Manager
Trainer: Ray Saltini, FFW

Drupal 8 Module Development
Trainer: Ted Bowman, Six Mile Tech

Drupal 8 Migration
Trainer: Ryan Weal, Kafei

Designing Web Sites with Drupal 8
Trainer: Daniel Schiavone, Snake Hill

Nov 30 2015

The primary goal of the project was to replace the website of Venlo with a user-friendly, top-tasks based website using the DvG distribution. The requirement to use DvG was already included in the tender. Not only did they want to use the distribution, they wanted to improve it too. We were asked to share our improvements and bug fixes with the community to make the project even better.

Dutch design agency Freshheads created the graphic design. They were given the task to create a sophisticated design (based on the DvG design) that highlighted Venlo’s character.

The project was executed using the agile project methodology in only 12 weeks of development time (4 sprints of 3 weeks each).

A top task website

Top Tasks Management is a model that says: “Focus on what really matters (the top tasks) and defocus on what matters less (the tiny tasks).” For inhabitants, these tasks include: getting a (new) passport/driver's license, handling moving houses, migration, birth and death, and similar tasks. To do any of these things you have to make an appointment and visit the town hall. That means ‘making an appointment’ is this website's number one task. Other information, like opening hours, news and contact info, is moved deeper/lower within the website.

Some might find a top task website a bit boring, since it has no frills like slideshows, twitter streams and blogs. On the other hand, the website is easy to use and the main tasks like making an appointment can be done within a few clicks. For more information on the idea behind a top task website, read “What Really Matters: Focusing on Top Tasks” by Gerry McGovern.

Outcome of the project

We planned four development sprints to create the website.

Sprint 1 focused purely on implementing DvG and creating a Drupal theme to match the design. We decided to create a sub theme of the theme that came with DvG. Most of the templates and javascripts were already taken care of by the DvG base theme so we mainly focused on writing good stylesheets.

The other three sprints were used to implement the needed system integrations, create an advanced search box, and make a trash removal module. Although the actual building phase of the project only took 12 weeks – 4 sprints of 3 weeks – the entire project took about 7 months. As in most cases, the runtime of a project like this was largely determined by the process of creating content. In this project, the team took the approach of having experienced web editors write the content. Their primary focus was to create content that really contributes to the needs of the users and is accessible (according to the WCAG guidelines) as well. The resulting content was discussed with the various departments in the organization. Their role was simple but effective: is the presented content correct and current? Even though they only had to react to existing content instead of writing it themselves, these are time-consuming processes.

As said, the building phase took 4 sprints of 3 weeks. Each sprint was roughly planned as follows: one week of building, one week of refinement, and one week of testing and bug fixing. In close cooperation with the project team, the test week was – when appropriate – accompanied by specialists (e.g. WCAG specialists). This led to (basic) improvements being addressed at an early stage of the project. To enable proper decision making, the product owner was a member of the client’s project team. In our opinion this is mandatory to ensure client commitment, quality, delivery and timelines. Extensive use of online tooling ensured swift and efficient communication.

The new site has now been live for about one month, so it’s a bit premature to identify a direct switch between on- and offline services. One of the reasons is that the expectation of proper online services, offered by municipalities, is rather low. Similar projects show that it takes about a year before a significant change from offline to online is established. That being said, the initial results are promising. There are practically no complaints and bounce rates are low. A dedicated, well-trained online team is in place to monitor the performance of the current site. There is still work to be done - not only tweaks on the current deploy but also development of other new services.

The new website is fully responsive and accessible and it looks great on all devices. At LimoenGroen we find that the experience for a web user should be equal for everyone, whether you use a desktop, mobile or screen reader. It has been tested with screen reader software by a blind developer and a lot of accessibility improvements have been made.

In terms of build tools we relied on Sass, Gulp, Vagrant, Ansible, Jenkins and Git.

Nov 30 2015

Under normal circumstances, accessing government data, creating hacks with it -- even making products with it -- would be seen as irresponsible, and possibly criminal.

Once a year, however, a multitude of Australian Government departments across federal, state and local levels get together to do all three, and push the government further towards open data.

The event, GovHack, was first conceived in 2009 as a Gov 2.0 Taskforce project, with the directive to take data from government APIs and use skills in Web/app development, hardware engineering, design, and UX to create a more useful and usable experience for Australians to consume. Since then it has evolved into an event that is entirely community- and volunteer-run, with an international presence and thousands of participants annually.

Acquia became involved in 2014, when the events were held in Canberra. Acquians helped run both the local events and the hacks themselves. As a company, we stepped it up another notch for the 2015 event, taking on the roles of silver sponsor and nationwide mentor.

As a solutions architect, I work with customers to establish technical solutions, providing guidance about capabilities and how to leverage tools effectively. As a GovHack mentor, I took that same approach with the competing teams, assisting them in thinking up novel ways to use government data, helping engineer displays, and architecting the final implementations.

I worked with teams using Drupal, other frameworks, and entirely custom application code. It was exciting to see the innovation arising from inspired teams that had open slather (Australian for "complete freedom") to play with open government data.

For teams using Drupal, we were able to provide the use of the full Acquia Cloud platform for their projects, enabling them to focus on module development and theme design rather than the provisioning of an entire infrastructure stack.

The results

This intense 48 hours of hacking culminated, a month after the teams started working, in the bestowing of awards at the GovHack red carpet event.

Acquia and Microsoft worked collaboratively through all Public Service team entries to pick the winner in our sponsored category. Finally coming out on top for us was Pick a Park by team "Feature Creeper and the Creeps." Not only was the submitted entry a good example of what a minimum viable product should look like, but it also showed an understanding of the data sets used in the build, rather than just a blind application of data to functionality.

Great to hear also was the news that two Acquia-mentored projects -- True Stories by team "Potato Heads," and Care Factors by team "Social Hackers" -- were runners up in their respective categories.

From here

As a GovHack sponsor and mentor, it was great fun to get involved in some of the hacks that were built, not only for the duration of the event, but also afterwards. Acquia is still providing a highly available best-of-breed Drupal platform for the Care Factors team, and I’m still working with the True Stories team on their extension and platform infrastructure. Even though it’s not Drupal, the use case is compelling enough for me to think it has real potential.

As a mentor, a technologist, and a GovHacker myself, I’m looking forward to next year's GovHack 2016, where we can enable more hacks, push for a greater degree of open data -- and improve the user experience and usability of government websites as we do it!

Nov 30 2015
Nov 30

Your website can be a complex and often misunderstood aspect of your digital branding strategy. In this article, I intend to dispel some of the more common myths that we encounter in our conversations with our clients.

Myth #1:  Your Website Is *for* Your Organization

A lot of work goes into the planning of a website, and sometimes during that planning process it is easy to think only about what you want out of your website. While knowing your business goals is important, and accounting for them throughout the design process is crucial, it’s also vital to realize that your website is not necessarily for you; it’s for the people who know nothing about you.

In a lot of cases, a customer's first contact with your business is through your website. Knowing that, it’s important to always keep in the back of your mind what the customer is looking for when they get there. Working with a reputable firm to help you determine visitor intent will allow you to create a content strategy geared towards moving a user through the buying cycle.

Myth #2: Once It's Built, It's Done

This is unfortunately something that we see happen all too often. A client will go through the process of having a website redesigned, and then once it’s launched, that’s it, it’s done. This could not be further from the truth. While getting a website launched can take a lot of work, the work doesn’t stop once it’s live. Performing updates to the underlying architecture, evaluating and tweaking the layout and content strategy, and optimizing for search engines and long-tail keywords are some examples of things that should be worked on after the site launch. Think of your website as a living organism that must constantly adapt and evolve to stay relevant.

Myth #3: It's Cheaper to Build It Yourself

For those more technically inclined, it is tempting to think that it will be easier to build a website yourself than to have a firm do it. While you will most definitely save money, it takes a lot more than technical expertise to build a website. I say this because what determines the success of a website happens well before a single line of code is written. The website discovery process involves analyzing your current site’s weaknesses in usability, architecture and SEO. It also involves sitting down with a firm and determining user personas, what those users intend to get out of the website, and how you can design the site to help with that. It involves knowing what the industry best practices are, understanding how users use the web, and formulating a plan to increase your site’s conversion rate. Every conversion lost is money lost to your business, so while you may save some upfront cost on your website build, you will most definitely lose out long term when it comes to your company’s bottom line.

Myth #4: Ranking #1 Is Important

This is a common myth that we hear all the time regarding SEO. While the number one listing will get the highest CTR (click-through rate), a bigger thing to focus on is ranking for keywords that accurately describe visitor intent. To help describe this, let’s use the LightSky site as an example.

If you search for “LightSky” in Google, we show up. Not totally unexpected, considering that if you are searching for “LightSky” we would be the most relevant result. But let’s take a moment to think about who would be searching for “LightSky”. For the most part, it would be people who already know about us. Since we don’t do any major form of advertising, this likely means our existing customers or people that we work with normally.

While we enjoy having our customers come to our website, what we are trying to do with organic SEO is to bring leads to our site for people who may not know of us. Because of this we target keywords like “Web Design Madison” or “Wisconsin SEO Consultant”. These keywords better reflect searcher intent and are more likely to result in a website conversion.

Myth #5: You Should Use a Templated Solution

There is no arguing the fact that websites are expensive, so it’s easy to see the allure of using a firm that offers a templated solution. Why is this a bad idea? Well, for starters, your website, like your business, should be unique. Keep in mind that for a lot of your customers, your website is your first impression, so it is important that you are able to quickly articulate what your business does and provide the user with the information they are seeking.

In addition to answering the user’s question, “What am I looking for?”, a custom theme allows you to really target conversions in a way that is oftentimes limited by a template. Having a custom theme means that you are going to be able to really dictate what the user sees as they navigate your business's sales funnel.

So what do you think about those myths? Are there any that I missed or that you feel should be listed? Leave a comment below!


Nov 30 2015
Nov 30

Meet Drupal Developers from Four Kitchens

This interview is part of an ongoing series where we talk with a variety of people in the Drupal community about the work they do. Each interview focuses on a particular Drupal role, and asks the individuals about their work, tools they use, and advice for others starting in that role. You can read all of these interviews under this list of Drupal roles posts.

Interested in learning how to become a Drupal developer, too? Check out our role-based learning pathway: Become a Drupal Developer.

Jon Peck

Jon Peck is the Senior Engineer at Four Kitchens. He's also a systems administrator and educator. He loves working on the backend of big enterprise sites with a focus on architecture and optimization, as well as playing keyboard in a progressive rock band.

Where to find Jon

David Diers

David Diers is an Engineer at Four Kitchens. Prior to his current position, he worked for many years in academic and administrative IT at the University of Texas at Austin, where he earned a master’s degree in music composition.

Where to find David

How do you define the Drupal developer role?

  • JP - Someone who analyzes and interprets needs, determines the best solution, implements, and then reviews results. Within Drupal specifically, they have a broad understanding of how Drupal interacts with itself (request handling, hooks, theming, and so forth) and they know how to seek out deeper knowledge.

  • DD - Someone who is a Drupal developer is well versed in site building and custom code, and knows when it's best to build or configure. They have an understanding of how Drupal works and approach problem solving in native Drupal fashion — all the while ensuring an extensible and flexible approach.

What do you currently do for work? What does your daily routine and work process look like? What kind of tasks do you do on a daily basis?

  • JP - Right now, I'm the architect of two publications that will be implemented in Drupal, including the migration from multiple legacy systems. I'm also consulting on performance and site auditing. I work from home; my day consists of occasional meetings (mostly via Zoom), development and documentation, and discussions via Slack. Projects are managed using JIRA, and code is in GitHub or Stash (depending on the client).

  • DD - Currently, I am working with a major media company to unify a large number of disparate Drupal sites and find ways of abstracting the approach in Drupal 8 so that truly diverse approaches can be accommodated within a single definitive content model. In recent years I've been doing a lot of strategic work, analysis and architecture, but depending on the project I could be doing a lot of gnarly development, deep in the code — plugins, migrations and database work.

What do others look to you to do on a project?

  • JP - I provide experience and historical perspective, along with recommendations about how to resolve difficult issues and continue to grow. I present myself as a collaborative resource both within the team and to whomever I'm working with. Also, I have the ability to translate non-technical requests into actionable development.

  • DD - I tend to get called in on tough SQL problems, migrations, and custom plug-in work on the technical side. On the more holistic side, I think my teams trust that I am going to bring a balanced viewpoint, and a deep investment and understanding of the business from the client perspective. I tend to fall in love with people and missions instead of the tech and tools and that's a good balance to have on a team of technology forward folks.

What would you say is your strongest skill? How have you honed that skill over the years?

  • JP - I can collaborate across groups in such a way that the conversation is centered around the common goal, not an “us vs. them” conflagration. It's something that I've had to consciously work at; finger pointing is easy, but swallowing pride and saying “we messed up and here's how we're going to fix it” is easier to write than to do :-)

  • DD - I listen well. I used to call it intuition or gut, but over time I realized it wasn't about a feeling with mysterious origins, it was all based on things I heard or didn't, essentially, it was about listening. Listening is about what is said but it is also hearing the silence. A sentence with a lot of gaps has just as much to say as one with a lot of words — you just have to know how to interpret it.

How did you get started on this career path?

  • JP - I'd been a PHP developer for many years, working with several open-source platforms and frameworks before focusing on Drupal. I have found success specializing in specific areas and having a broad knowledge in many others, and the Drupal community drew me in ways that are unique and refreshing.

  • DD - I went to school for music but had developed an interest in technology. It seemed like a good path to support my artistic career. I caught a break with a company in Houston that saw my potential and my willingness to put in a ton of effort to making a good product. Over the years, the one thing I keep coming back to is that this career really provides a lot of opportunities to learn something new almost constantly. For someone who loves learning, it's been a great choice.

What is most challenging about being a Drupal developer?

  • JP - Adapting to the Drupal mindset, some of which (prior to Drupal 8) was entrenched in “it's always been done that way”. Finding consistent quality documentation in edge cases is typically a challenge.

  • DD - The community is really broad and egalitarian. There are a lot of perspectives and approaches which tend to get pretty equal treatment and time in the sun. That is cool. However, solutions aren't truly equal in most cases, and as a beginning developer and even a senior one — drawbacks aren't always brought to the surface — so it really takes some effort to identify the merits of a particular approach over another and to discuss that, outside of how these solutions have been facilitated in code or community.

What are your favorite tools and resources to help you do your work?

  • JP - The people I work with, both within 4K and across the open-source community. On my workstation itself, PhpStorm, Drush, Drupal Console, gulp, and site_audit.

  • DD - My team members are my biggest resource. I've learned a lot and hopefully helped a lot — both are valuable to your growth as a developer. For tools, I probably couldn't do what I do without a debugger and access to Drupal docs.

If you were starting out as a Drupal developer all over again, is there anything you would do differently?

  • JP - [I would have] registered on when I first started using Drupal and contributed more back to the community. Playing catch-up now!

  • DD - Start writing earlier and more consistently. It's a wonderful way to take that dynamic of teaching and learning out into the world. People are pretty free with their opinions on the internet — it can be nice or not, but you will always learn something.

What advice do you have for someone just starting out as a Drupal developer?

  • JP - Present at Drupal User Groups, Camps, and Cons! My personal Drupal "break-out" moment was a direct result of presenting at DrupalCamp Western NY 2011; it made many introductions and opened many doors.

  • DD - Try not to make your first Drupal project the big one. Find something small, with very modest needs and do that first; it will pay off when you finally do get to the big one. Go to events, camps, and DrupalCon. Take some training and talk with folks. Most importantly, dig into core and the main modules source code — it's really all happening there. The sooner you understand what's going on, the better.

Nov 30 2015
Nov 30

I have been on a performance kick lately, trying to optimize caching and the delivery of pages to users from Drupal.  It seems like the biggest piece of every site I work on is the images.  While Drupal image styles allow us to create the exact size we want, I was looking for a way to better optimize image size, create progressive JPEGs, and keep the rendered images looking as close to the original as possible.

My go-to tools for performance testing the output of any site are:

Each of these tools will analyze the URL you input and give you actionable items to optimize. One of them provides the most detailed information about images and how they affect the page load.

I have been doing a lot of research about optimizing content delivery lately.  My reading about optimizing images led me to this incredibly detailed article on Smashing Magazine, Efficient Image Resizing With ImageMagick by Dave Newton.  The author describes in great detail how to optimize images for the web using ImageMagick, something I immediately had to dig into after I re-read Mr. Newton's article a few times.

Drupal ships using the GD toolkit, which is almost always enabled in PHP.  To use ImageMagick instead, we need to ensure that

  1. ImageMagick is installed on our dev, staging, and production environments.
  2. We install, enable and patch the Drupal ImageMagick module.

Installing ImageMagick for Drupal

# Installing ImageMagick on Ubuntu
sudo apt-get install imagemagick

# Installing ImageMagick on Mac OS using Homebrew
# (if you develop locally on a Mac)
brew install imagemagick

# Confirm the installation and install location;
# you will need this path to enter into Drupal
which convert

# Install the Drupal module
drush dl -y imagemagick
drush en -y imagemagick imagemagick_advanced

Configuring ImageMagick for Drupal

Now that you have ImageMagick and the Drupal module installed, you can configure them.  In the configuration section of the admin:

  • Configuration > Media > Image Toolkit
  • /admin/config/media/image-toolkit

We now have the ability to select the ImageMagick toolkit instead of the default GD toolkit.  After you select ImageMagick, you need to enter the "Path to the 'convert' binary", which we figured out earlier when we ran "which convert".  Usually this is:

  • /usr/local/bin/convert (Mac)
  • /usr/bin/convert (Ubuntu)
  • C:\Program Files\ImageMagick-6.3.4-Q16\convert.exe (Windows)
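If you manage environments with Drush, the same selections can in principle be scripted. This is a hedged sketch: `drush vset` and the `image_toolkit` variable are standard Drupal 7, but the `imagemagick_*` variable names are assumptions that should be verified against the module version you have installed.

```shell
# Hypothetical Drush equivalent of the Image Toolkit form (Drupal 7).
# NOTE: imagemagick_convert and imagemagick_quality are assumed variable
# names; confirm them in the module's own code before scripting this.
drush vset image_toolkit imagemagick
drush vset imagemagick_convert /usr/local/bin/convert
drush vset imagemagick_quality 82
```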

The default configs we want to set, going back to the optimized settings that Dave Newton laid out in the Smashing Magazine article, are:

  • Image quality - 82%
  • Change image resolution to 72 dpi - Yes
  • Convert colorspace - sRGB
  • Color profile path - Leave blank to use the sRGB colorspace defined above.
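As a rough illustration, those form settings correspond to standard ImageMagick command-line flags, so you can preview the effect of the configuration before saving it in Drupal. The file names below are throwaway examples, not anything from the module:

```shell
# The form settings above, expressed as standard ImageMagick flags:
#   -quality 82       ->  Image quality: 82%
#   -density 72       ->  change resolution to 72 dpi
#   -colorspace sRGB  ->  convert colorspace to sRGB
#   -strip            ->  strip metadata
# Skip quietly on machines without ImageMagick installed.
command -v convert >/dev/null || { echo "ImageMagick not installed; skipping"; exit 0; }

cd "$(mktemp -d)"

# Generate a throwaway source image so this sketch runs as-is.
convert -size 400x300 gradient:blue-white sample.png

# Apply the article's settings in one pass.
convert sample.png -quality 82 -density 72 -colorspace sRGB -strip sample.jpg

# Report the output format.
identify -format '%m\n' sample.jpg
```

Running the flags by hand like this makes it easy to eyeball the quality/size trade-off on a representative image before committing the settings site-wide.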

Optimizing the Image styles

Thanks to the ImageMagick Advanced module, we also get an additional setting that we can apply on individual Image styles called "Strip Metadata".  This will remove all of the image's metadata, helping reduce file size immensely according to the guys on Talking Drupal Episode #107.

Strip Metadata preset

Patching ImageMagick Advanced Drupal module for additional features

There are two patches available that give us more ImageMagick functionality, and more control over our images.

  • Option for progressive jpeg - This issue, and patch gives us the option in the ImageMagick configuration to set the "Interlacing method" to Line, or Plane which will create interlaced GIFs or progressive JPEG images.

  • Additional image styles with ImageMagick Advanced - This issue, and patch gives us additional options in Image styles.  The main addition to pay attention to here is the "Sharpen" preset, but you can also apply blur, drop shadow, and perspective transform if you so desire.

I prefer to download patches to /sites/all/patches/ and apply them from there.  This gives the development team a consistent place to look to see if patches have been applied.  Download both patches and then cd into the ImageMagick folder:

# Navigate to the module folder
cd sites/all/modules/contrib/imagemagick/

# Apply the patches
patch -p1 < ../../../patches/option_for_progressive_jpeg-1883192-4.patch
patch -p1 < ../../../patches/additional_image_styles-1810390-5.patch

You will now get the ability to set the "Interlacing method" in the ImageMagick configuration, and you will have an additional Image style option of "Sharpen" to apply.
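As a hedged aside (not covered in the article itself): `patch` can also verify a patch before applying it and reverse it later, which is handy when a module update supersedes a patch stored in /sites/all/patches/. The sketch below is self-contained with throwaway file names, so it can be run anywhere:

```shell
# Build a tiny a/ vs b/ tree and a unified diff, mimicking how
# drupal.org patches are laid out, then apply and reverse it.
cd "$(mktemp -d)"
mkdir a b
printf 'quality = 75\n' > a/settings.inc
printf 'quality = 82\n' > b/settings.inc
diff -u a/settings.inc b/settings.inc > fix.patch || true   # diff exits 1 when files differ

mkdir module && cp a/settings.inc module/
cd module
patch -p1 --dry-run < ../fix.patch   # verify it applies cleanly, changing nothing
patch -p1 < ../fix.patch             # apply; -p1 strips the leading a/ or b/ component
grep quality settings.inc            # now: quality = 82
patch -p1 -R < ../fix.patch          # -R reverses the patch
grep quality settings.inc            # back to: quality = 75
```

The `--dry-run` check is a cheap way to confirm a stored patch still applies after you update a module, and `-R` backs it out cleanly if the fix has landed upstream.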

The Results

Commit, deploy and configure these changes on your production server and run your speed tests again.  You should see a vast improvement in image size thanks to the image quality, strip metadata, image resolution and colorspace settings.  Progressive images may increase your file size slightly, but your users will perceive a speed increase as the images start to render immediately instead of waiting to load completely before rendering.  Sharpen will help keep the quality of the original image in the smaller format.


Photoshop - 423KB:

Smashing Magazine Article ImageMagick settings - 236KB:

Drupal ImageMagick settings - 269KB:

While the Smashing Magazine article settings are by far the lowest at 236KB from the original 1.9MB file, the settings that I was able to configure using the ImageMagick Drupal module and the provided patches come in only about 14% larger, at 269KB.

Compare this with the Photoshop "Save for web" settings at 60% quality and progressive, which come out at 423KB, and you now have some considerable savings, in a progressive file that looks just as good as the original image in a web browser.

What are your favorite ways to optimize images in Drupal 7?

Nov 29 2015
Nov 29

Donna Benjamin (kattekrab) joins Mike Anello, Andrew Riley, and Anna Kalata to talk about funding free and open-source software (FOSS), the D8 Accelerate Program, and the launch of Drupal 8!


DrupalEasy News

  • The next session of the 12-week Drupal Career Online course starts March 7, 2016 - visit for all the details.

Drupal Association

Three Stories


Picks of the Week

Upcoming Events

Follow us on Twitter

Five Questions (answers only)

  1. Looks at rocks.
  2. Toggl.
  3. Setting a goal.
  4. A wallaby.
  5. DrupalCon San Francisco.

Intro Music

Drupal Way - by Marcia Buckingham (vocals, bass and mandolin) and Charlie Poplees (guitar). The lyrics by Marcia Buckingham, music by Kate Wolfe.


Subscribe to our podcast on iTunes or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Nov 28 2015
Nov 28



The keynote at DrupalCamp Bulgaria was planned to be a little left field from the get-go; however, it went further out still after Paris came under attack on the night of 13 November 2015.

#JeSuiBaghdad #JeSuiParis #JeSuiBeirut #JeSuiChibok #JeSuiKarachi #JeSuiMadrid #JeSuiDamascus #JeSuiAnkara #JeSuiLondon

Dalai Lama Quote 2015

#JeSuiMali: the list goes on, but beyond on-the-bench solidarity, what are we doing as individuals, and as a community, to facilitate and help build a better, safer, more cohesive and pluralist society?

As a FOSS community we are constantly talking of give-back but are we engaged enough?

How could we take the strengths and learnings that make us a successful tech community to wider non-tech audiences, with a view to creating social transformation that addresses the needs of our societies in these turbulent times? What can we learn from the transformation FOSS and the Cloud have had on our ecosystem as technologists, and how can we export that beyond tech to heal and build a stronger society?

I have more questions for discussion than answers; however, there is an in-flight and successful start made by Peace Through Prosperity, using Agile, open source and the Cloud to deliver social transformation programs, that could be a starting point for the Drupal community to engage with in their own geographies. The open source component of this program is in development and work in progress can be seen here; if you’d like to contribute and #GiveBack beyond our bubble, please get in touch over Twitter or LinkedIn.

Links shared within the keynote:

The presentation from the Keynote:

Nov 27 2015
Nov 27

One thing that is exciting to me is how much we appear to have gotten right in Drupal 8. The other day, for example, I stumbled upon a recent article from the LinkedIn Engineering team describing how they completely changed how their homepage is built. Their primary engineering objective was to deliver the fastest page load time possible, and one of the crucial ingredients was Facebook's BigPipe.

I discussed BigPipe on my blog before: first when I wrote about making Drupal 8 fly and later when I wrote about decoupled Drupal. Since then, Drupal 8 shipped with BigPipe support.

When a very high-profile, very high-traffic, highly personalized site like LinkedIn uses the same technique as Drupal 8, that solidifies my belief in Drupal 8.


LinkedIn supports both server-side and client-side rendering. While Drupal 8 does server-side rendering, we're still missing explicit support for client-side rendering. The advantage of client-side rendering versus server-side rendering is debatable. I've touched upon it in my blog post on progressive decoupling, but I'll address the topic of client-side rendering in a future blog post.

However, there is also something LinkedIn could learn from Drupal! Every component of a LinkedIn page that should be delivered via BigPipe needs to write BigPipe-specific code, which is prone to errors and requires all engineers to be familiar with BigPipe. Drupal 8, on the other hand, has a level of abstraction that allows BigPipe to work without the need for BigPipe-specific code. Thanks to Drupal's higher-level API, Drupal module developers don't have to understand BigPipe: Drupal 8 knows which page components are poorly cacheable or not cacheable at all, and which page components are renderable in isolation, and uses that information to automatically optimize the delivery of page components using BigPipe.

Drupal's BigPipe support will benefit websites small and large. But it is exciting to see Drupal support the advanced techniques that were previously only within reach of the top 50 most visited sites of the world!

Nov 27 2015
Nov 27


In an organization consisting of various discrete software systems, Drupal's ability to integrate with enterprise and third-party applications is not a niche feature but a basic building block of developing a robust, functional system.

Hence, Drupal has seen significant adoption and acceptance among enterprises. Its innate strength in integrating with third-party applications and systems across varied verticals and industries has been proven time and again. Drupal’s highly modular and scalable architecture is what makes this possible. At times Drupal needs to be integrated via connectors or adapters, which act as critical components of the integration architecture.

Now, with the release of Drupal 8, Drupal’s strength in integrating with third-party systems has been enhanced even further. Drupal 8 has native support for integrations, as there are four widely popular web services modules in Drupal core, namely RESTful Web Services, Serialization, Hypertext Application Language (HAL) and HTTP Basic Authentication. With API-first publishing, the possibilities for using Drupal to expose content via JSON and XML are almost limitless. Full decoupling of the backend is a feature that is going to find numerous applications when it comes to third-party integrations.

Some of the verticals with which Drupal integrates have been highlighted below.

Drupal and ERP Systems

ERP (enterprise resource planning) software is the backbone of an organization and integrates with all the other systems the organization uses. Drupal is capable of integrating with almost all the well-known ERP systems, with ready-to-use generic modules available for the integration implementation; if not, Faichi has the expertise needed to write a custom module to implement the same. Drupal integrates with LDAP for authentication and takes care of all the potential security threats that may arise in the process. In the article here we have highlighted how Drupal, as a system and as a community, takes care of security threats. Some of the often-used ERP systems are SAP, SharePoint, NetSuite etc. When integrating Drupal with SAP ERP, Java CAPS, the enterprise service bus software suite from Oracle, is used as a connector to facilitate the integration in a service-oriented architecture environment.

Drupal and Customer Services Vertical

Drupal integrates with the major globally known customer service products and CRMs as well. This kind of integration can be used to provide a seamless login experience to users and to display forms on the Drupal site, the data from which can be fetched into the CRM system. Integration can also be used to fetch information from the CRM system to the Drupal site and vice versa. Entities in Drupal are mapped to objects in the CRM system to execute the integration. Faichi has the expertise of building both Drupal sites and CRM systems, which makes us a good fit to carry out such integrations seamlessly. Some well-known products in the customer services vertical are Zendesk, SugarCRM, Salesforce, ExactTarget, HubSpot, Adobe Test and Target, Janrain etc.

Drupal and EHR Systems

An EHR (Electronic Health Record) system stores all of a patient's health information electronically, in a digital format that can be used by other information systems. Usage of EHRs has been constantly increasing in the healthcare industry, and rightly so. Integrating Drupal with an EHR system can help create a robust system, coupling strong content management capabilities with the innate strength of the EHR for managing patients’ documents. Some of the popular EHR systems in use that can be integrated with Drupal are NueMD, Meditouch, e-MD, PHI etc.

Nov 27 2015
Nov 27

For the past six years I've run a company that takes good care of its employees, has happy customers, and has the financials for sustainable, profitable growth. Wunderkraut has grown approximately 1000% organically during the past five years, and on top of this we've done mergers and acquisitions. Today the company is closer to 200 people and hundreds of happy customers all over Europe.

The future looks bright at the moment. All of our core digital markets in Europe look great, Drupal 8 is finally out, hiring is relatively easy and we've managed to grow even in the challenging markets of 2015. The company is in good shape to continue steady growth for the future years.

Wunderkraut will continue operating as a network of independent, cross-functional teams working together to reach common goals. This structure makes the company very resilient and innovative; it combines many benefits of smaller startups with the scale of a larger company. It's truly self-organising: the company really doesn't need a CEO at this point.

This means I'm able to take a look at new opportunities for the first time in a long time: Get more involved with startups, help professional services organisations to improve, help large organisations adopt agile management and improve their digital offerings, and so on.

I'm excited to have the opportunity to take the lessons learned over the past 20 years in the digital industry and put them to good use in a different setting. My future plans are still mostly open at this point; 2016 will be a great year for both myself and Wunderkraut!

Nov 27 2015
Nov 27

CKEditor is a superb WYSIWYG text editor, and when used together with the CKEditor module for Drupal it's a great solution for enabling editors to easily add HTML to content. A common issue site builders seem to have is changing the list of styles available in the Styles drop-down combo. Here's a handy tip for getting it working!

By chris, Sat 2015-10-24 12:26

CKEditor comes with a Styles drop down combo that enables editors to select some text and then decorate it by selecting from a list of available styles:

Screenshot of CKEditor showing Styles drop down

These styles might be everything you need in which case, you're home and dry!

In my case, I wanted editors to be able to pick from just the following styles: paragraph, heading 3 through heading 6. That's all.

The method for customising the styles presented in the drop down combo is to copy the ckeditor.styles.js (or styles.js) file from the CKEditor library to your theme (or somewhere else) and to configure the Drupal CKEditor profile settings to pick up the new file:

Screenshot showing CKEditor profile set to use styles from the theme

If you, like me, have taken the styles.js file from the CKEditor library itself (CKEditor version 4.5.4) and have placed this in your theme directory, you might have got the following blank styles drop down:

Screenshot showing the blank styles dropdown

What's going on? Well, the solution is easy, once you know it!

All that is needed is to edit the JavaScript file you've copied to your theme and change the following line:

CKEDITOR.stylesSet.add( 'default', [

to

CKEDITOR.stylesSet.add( 'drupal', [

and that's it! You might need to clear your browser cache, or alternatively append a query string to the JavaScript file to bust the cache, and then you're done.
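If you do this often, the copy-and-rename step can be scripted. Here's a minimal sketch, where the library and theme locations are assumptions; the demo fakes them in a temp directory so it's safe to run anywhere, and in a real site you'd point LIB and THEME at your actual CKEditor library and theme directories:

```shell
#!/bin/sh
# Demo of the copy-and-rename step. LIB and THEME are assumptions; here
# they are faked inside a temp dir so the sketch runs end to end.
WORK=$(mktemp -d)
LIB="$WORK/ckeditor"
THEME="$WORK/mytheme"
mkdir -p "$LIB" "$THEME"
echo "CKEDITOR.stylesSet.add( 'default', [" > "$LIB/styles.js"

# The actual fix: copy styles.js into the theme, renaming the style set
# from 'default' to 'drupal' so the Drupal CKEditor module picks it up.
sed "s/stylesSet.add( 'default'/stylesSet.add( 'drupal'/" \
    "$LIB/styles.js" > "$THEME/ckeditor.styles.js"
cat "$THEME/ckeditor.styles.js"
```

After running it for real, remember to point the CKEditor profile at the new file and clear caches as described above.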

Nov 27 2015
Nov 27

Recently I had to upgrade someone's Apache Solr installation from 1.4 to 5.x (the current latest version). For the most part, a Solr upgrade is straightforward, especially if you're doing it for a Drupal site that uses the Search API or Solr Search modules, as the Solr configuration files are already upgraded for you (you just need to switch them out when you do the upgrade, making any necessary customizations).

However, I ran into the following error when I tried loading the core running Apache Solr 4.x or 5.x:

org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: org.apache.lucene.index.IndexFormatTooOldException: Format version is not supported (resource: MMapIndexInput(path="/var/solr/cores/[corename]/data/spellchecker2/_1m.cfx") [slice=_1m.fdx]): 1 (needs to be between 2 and 3). This version of Lucene only supports indexes created with release 3.0 and later.

To fix this, you need to upgrade your index using Solr 3.5.0 or later, then you can upgrade to 4.x, then 5.x (using each version of Solr to upgrade from the previous major version):

  1. Run locate lucene-core to find your Solr installation's lucene-core.jar file. In my case, for 3.6.2, it was named lucene-core-3.6.2.jar.
  2. Find the full directory path to the Solr core's data/index.
  3. Stop Solr (so the index isn't being actively written to).
  4. Run the command to upgrade the index: java -cp /full/path/to/lucene-core-3.6.2.jar org.apache.lucene.index.IndexUpgrader -delete-prior-commits -verbose /full/path/to/data/index

It will take a few seconds for a small index (hundreds of records), or a bit longer for a huge index (hundreds of thousands of records), and then once it's finished, you should be able to start Solr again using the upgraded index. Rinse and repeat for each version of Solr you need to upgrade through.

If you have directories like index, spellchecker, spellchecker1, and spellchecker2 inside your data directory, run the command over each subdirectory to make sure all indexes are updated.
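That per-directory pass can be scripted as a loop. A sketch, where the jar path is an assumption and the data directory is faked in a temp dir so the script is runnable as-is; it only prints each IndexUpgrader command, so drop the echo and point DATA_DIR at your real core's data directory (with Solr stopped) to run the upgrades for real:

```shell
#!/bin/sh
# Print the IndexUpgrader invocation for each index directory under a
# Solr core's data/ dir. LUCENE_JAR is an assumption; DATA_DIR is faked
# here so the sketch runs anywhere. Drop the "echo" to upgrade for real.
LUCENE_JAR=/full/path/to/lucene-core-3.6.2.jar
DATA_DIR=$(mktemp -d)   # real upgrade: /var/solr/cores/corename/data
mkdir -p "$DATA_DIR/index" "$DATA_DIR/spellchecker" "$DATA_DIR/spellchecker2"

CMDS=$(for dir in "$DATA_DIR"/index "$DATA_DIR"/spellchecker*; do
  [ -d "$dir" ] || continue
  echo java -cp "$LUCENE_JAR" org.apache.lucene.index.IndexUpgrader \
       -delete-prior-commits -verbose "$dir"
done)
echo "$CMDS"
```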

For more info, see the IndexUpgrader documentation, and the Stack Overflow answer that instigated this post.

Nov 26 2015
Nov 26
Tags: Drupal, drupal8

In this blog post we will take you through all the components required to provision a high-availability Drupal 8 stack on Microsoft Azure. This is an extract from the demonstration given at Microsoft Ignite on the Gold Coast in November 2015.

What is Drupal?

Drupal is content management software. It's used to make many of the websites and applications you use every day. Drupal has great standard features, like easy content authoring, reliable performance, and excellent security. But what sets it apart is its flexibility; modularity is one of its core principles. Its tools help you build the versatile, structured content that dynamic web experiences need.

More information

aGov for Drupal 8

aGov is a totally free, open source Drupal CMS software distribution developed specifically for Australian Government organisations. Easy to install and configure, this pre-packaged Drupal CMS complies with all Australian Government standards and provides a full suite of essential website management features. It can be used out of the box for basic websites, or customised with any standard Drupal 8 module to deliver large scale, complex government platforms.

More information


To replicate this demo you will need the following tools installed:


So why do we want a high availability (HA) architecture? Why can’t we just deploy a single Virtual Machine (VM) application? Both of these questions come down to Azure’s service level agreements (SLAs): to get a 99.95% SLA on Azure you need an architecture with a minimum of 2 hosts. But don’t fear: just because our site is deployed in a high availability architecture doesn’t mean we have to make things difficult. For this demo we are going to stick to the following 3 principles:

  • Use services instead of custom deployments, e.g. databases and file storage.
  • Keep our complex areas as simple as possible, e.g. scripts over tools. You may choose a tool that suits your needs in the future, e.g. deployment tools, but for now we will keep it as high level as possible.
  • Implement as much open source as possible.

The diagram below illustrates the areas which we are going to focus on for this demo and how they interact.

  • Resource groups
  • Database
  • File storage
  • VMs
  • Networking (endpoints)
  • Scaling and Availability sets

Resource Groups

Azure is very focused on providing high-level tooling for deploying applications, without requiring the end user to know anything about the low-level architecture, e.g. networking and replication. The core fundamentals that resource groups provide us are:

  • Access control
  • Billing management
  • Global architecture management

So how do I create a resource group? These are easily created via the Azure UI when creating new resources. For the purpose of this demo we will create one at the same time we create our database backend, then add resources to it via the same UI as we create additional resources.


The database is the primary storage backend for the application’s persistent data; this component is responsible for storing all our content. Drupal 8 core ships with database drivers for MySQL, PostgreSQL and SQLite. In this demo we are going to use a MySQL backend; on Azure we have 3 options:

  • ClearDB
  • MariaDB Enterprise
  • Install your own Mysql backend

For this demo we are going to use ClearDB. The reasons behind this are:

  • Going back to our principles for this demo, we want to rely on services where possible
  • Quick to provision
  • Simple interface
  • Unlike MariaDB, VMs are abstracted away behind the ClearDB service on Azure

[embedded content]

As your site grows you should have a discussion about which backend is better for you. In the following video we provision an Azure ClearDB MySQL backend. Here is some further reading on the database backends:

File Storage

In conjunction with database storage, we also need file storage to make uploaded files available to all our application servers. If only we had a service we could use as a drop-in file storage backend... actually, we do! Azure Files to the rescue. Azure Files provides us with a service for mounting scalable backend storage over the SMB 3.0 protocol. To set up Azure Files we need to:

  • Setup an Azure Storage account for our Resource Group
  • Create a file share on the account
  • Mount the storage onto our hosts
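The mount step itself is a plain SMB (CIFS) mount. A hedged sketch: the account name, share name, key and mount point are all placeholders for the values from your own storage account, and the script only prints the command, so it is safe to run anywhere (remove the echo, with cifs-utils installed and root access, to mount for real):

```shell
#!/bin/sh
# Sketch of mounting an Azure Files share over SMB 3.0. ACCOUNT, SHARE,
# KEY and MOUNTPOINT are placeholders (assumptions); the storage account
# key serves as the password. Only echoed here -- drop the echo to mount.
ACCOUNT=mystorageaccount
SHARE=myshare
KEY='<storage-account-key>'
MOUNTPOINT=/data/app/sites/default/files

CMD="mount -t cifs //$ACCOUNT.file.core.windows.net/$SHARE $MOUNTPOINT \
-o vers=3.0,username=$ACCOUNT,password=$KEY,dir_mode=0777,file_mode=0777"
echo "$CMD"
```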

The video below demonstrates the setup of a file share, these details will get used later on in the demo.

[embedded content]

aGov image

PreviousNext have done all the hard work of deploying aGov on Azure; we have done this by shipping images to the VM Depot. To achieve this, PreviousNext leverage Packer by HashiCorp, and a custom provider from Microsoft:

With these tools combined we can provision and “bake” aGov images for you to leverage in your deployments. If you wish to extend the image that can be done by forking our open source project.

For your convenience we ship 2 flavours of this image:

  • Full stack - The entire LAMP stack, meaning you do not have to worry about provisioning MySQL or file backends. This is only intended for aGov demo purposes before moving to a HA stack.
  • HA - This image provides all the same configuration as the above, minus the MySQL backend. It also comes with a suite of scripts for assisting with mounting remote storage and connecting to the MySQL backend.

For this demo we will only be using the HA image. In the following video we take you behind the scenes and provision a new image which we can add to the VM Depot.

[embedded content]

So how do we deploy our images and wire them to our backend services? Custom Data to the rescue!

Custom Data

Custom Data allows us to inject a script that will be run at provision time on each host. Here we have an example script which:

  • Sets up our database connection backend details
  • Mounts the remote file storage

Create a new script with the following contents:


# Changing The Storage Location of the Sync Directory:
setenv AGOV_DIR_CONFIG_SYNC "sites/default/files/config_/sync"
setenv AGOV_HASH_SALT ""
setenv AGOV_DB_HOST ""
setenv AGOV_DB_NAME ""
setenv AGOV_DB_USER ""
setenv AGOV_DB_PASS ""
# Restart Apache so the environment variables take effect.
sudo /etc/init.d/apache2 restart

# Persistent storage for sharing between hosts.
mount-smb "" "/data/app/sites/default/files" 33 33 0777 0777 "" ""

You will reference this script in the next section, so take note of its location. Now we are at the stage where we want to deploy these applications; this is where the Azure CLI comes in.

Azure CLI

We chose to use the Azure CLI for this demo so you could reproduce the same results via drop-in scripts. If you still need to install the Azure CLI tools, you can do so via the following 2 methods:

Using the Azure CLI and the following script we can provision 3 hosts and wire them to our backends. Here is a breakdown of the variables:

  • IMAGE - An image built by PreviousNext, available on the VM Depot.
  • REGION - The region which our site should reside in.
  • USER - Username for access to the CLI on the host
  • PASS - Password for access to the CLI on the host
  • GROUP - This is the ID of our resource group which we created above
  • AVAIL - Sets up an availability set with the name provided
  • CUSTOM - Path to the Custom Data, this is loaded and associated with the machine once the command is run

Create a new file with the following contents:


REGION='Australia East'
# The location of your custom-data script.

azure vm create --connect "$GROUP" -o "$IMAGE" -l "$REGION" "$USER" "$PASS" --availability-set="$AVAIL" --ssh 12345 --custom-data=$CUSTOM
azure vm create --connect "$GROUP" -o "$IMAGE" -l "$REGION" "$USER" "$PASS" --availability-set="$AVAIL" --ssh 12346 --custom-data=$CUSTOM
azure vm create --connect "$GROUP" -o "$IMAGE" -l "$REGION" "$USER" "$PASS" --availability-set="$AVAIL" --ssh 12347 --custom-data=$CUSTOM

When you run this script, you should see three new VMs being created in the output. In this video we are provisioning our hosts with the above script.

[embedded content]


Now that we have provisioned our instances, we need to install the site. Here is an example set of commands which will:

  • Log you into the remote host
  • Change to the application's directory
  • Install the site with a Drupal CLI tool called Drush

In contrast to the other scripts, the following are commands which you run on the command line one after the other.

# Connect to one of the hosts on the cluster.
ssh @ -p 12345

# This is where the application is stored.
$ cd /data/app

# Install the application.
$ CMD="drush site-install agov -y --site-name='aGov HA demo' --account-pass='' agov_install_additional_options.install=1"
$ sudo -E -u www-data /bin/bash -c "$CMD"

When you run this script, you will see a message showing Drupal was installed.

Endpoints

Endpoints are a networking abstraction which allows us to “poke holes” in the public IP and forward connections through to our VM instances running in the Resource Group.

Endpoints come in 2 types:

  • Single - Forwarding rule exposing a single host's port. Good for debugging single instances.
  • Balanced - Forwarding rules to balance traffic across multiple instances. Great for distributing traffic.

Here are some examples of using endpoints via the Azure CLI:

Single endpoint to access the “agov8-ha” host on port 8080

$ azure vm endpoint create agov8-ha 8080 $LOCAL

Balanced endpoint to access the “agov8-ha”, “agov8-ha-2” and “agov8-ha-3” hosts on port 80

 Create a new script with the following contents:



azure vm endpoint create-multiple agov8-ha $RULES
azure vm endpoint create-multiple agov8-ha-2 $RULES
azure vm endpoint create-multiple agov8-ha-3 $RULES

The RULES variable is quite complex; here is a breakdown of the options we have passed into our balanced endpoint.

  • public-port: 80
  • local-port: 80
  • protocol: tcp
  • idle-timeout: 10
  • direct-server-return: (not set)
  • probe-protocol: tcp
  • probe-port: 80
  • probe-path: /robots.txt
  • probe-interval: 60
  • probe-timeout: 120
  • load-balanced-set-name: agov8-ha
  • internal-load-balancer-name: (not set)
  • load-balancer-distribution: (not set)

When you run this script, you should see the endpoints being created successfully in the output. To verify this has run successfully, run the following command against one of the VMs:

$ azure vm endpoints list agov8-ha

This will return details on the endpoints which have been setup for the “agov8-ha” VM instance.

In this demo we setup a balanced endpoint on our application.

[embedded content]

Scaling

Congratulations! You have deployed a highly available Drupal 8 application on Microsoft Azure! But your job is not complete: it's time to save costs with scaling. HA architectures on Azure are all about getting the 99.95% uptime, and to get that SLA we require a minimum of 2 instances. In our current situation, let’s assume we are getting low traffic and we deployed 3 VMs: we are losing money! To get our “bang for buck” we want our application to follow these rules:

  • Scale up in high traffic situations
  • Scale down in low traffic situations, but keep a minimum of 2 instances

Azure scaling gives us 2 options applicable to our application; those options are:

  • Time based - Turn off VMs at night, turn on extra VMs during the day (or any times you see fit)
  • Metrics based - If the CPU is greater than X, turn on VM. If CPU is less than Y, turn off a VM.

Notice the phrasing “turn on” and “turn off”: this is because your maximum and minimum application size is dictated by how many instances you have provisioned. However, don’t fear, you are not being charged for instances which are turned off; you are just saving them for later when you need them, preconfigured and ready to go.

In this demo we will scale our cluster down to 2 instances, while keeping a max size of 3.

[embedded content]

Additional Considerations

Now that we have our application deployed we can start to think about how we manage it on an ongoing basis. Here are the conversations you need to start having now that you are running a HA architecture. These are topics which warrant their own blog posts; for this post, here are some products in the Azure marketplace which will let you get a feel for these implementations:


In this demo blog post we have covered many of the topics involved in deploying a high-availability architecture on Microsoft Azure, but this is not the end. As a follow-up, I strongly recommend you look at how this type of architecture fits into your current Drupal deployments and where some of the gaps might be. If you have any further questions please reach out in the comments and let’s keep the conversation going.

Drupal Drupal8 Azure aGov
Nov 26 2015
Nov 26

There are many articles out there talking about the advantages and disadvantages of one cloud provider or another. But most of them are sponsored and biased, and really none of them pinpoint what matters. They are distracting, focusing your attention away from what they don't want you to know.

You don't want to wake up one day and find that your infrastructure is taking control of your business and, what is worse, that you can do nothing about it.

But preventing that from happening is very easy!

These are two very important recommendations that will come in handy sooner or later if you deal with big - and long term - projects:

  • [1] Reduce friction between on-premises and cloud, ideally making it possible to move from one to the other without affecting your processes or your business.
  • [2] Try to make sure that nothing you build critically depends on any specific software stack.

While reducing friction between cloud and on-premises is easy, building your software to be 100% software-stack independent is close to impossible.

Of course, for one-shot, short-term projects you are better off selling your customer off to a specialized cloud provider so you can move on to the next project ASAP. But for big projects with high ongoing investments, real risk management is crucial for long-term success.

If anyone wants to trap you in their spider web, your best weapon is to be clear about what your priorities are, unless you want to end up fighting yourself free from vendor lock-in features.

Image 1 - Development team making sure that no one makes the same mistake after migrating from PaaS to colocated.

You wake up one day and find out you can cut your hosting bill by more than half by moving from PaaS to colocated or IaaS. It makes sense to move away. But you realize you have built everything around your cloud provider's services. That was an early decision you overlooked, and now it is going to cost you a lot of money. What makes this change so traumatic? Everything you do depends on a specific set of tools that your cloud provider gives you and that only this cloud provider offers.

If any of the following are true, you have already taken the bait:

  • Your continuous integration procedures depend on tools that are not a market standard, or that you cannot run on premises.
  • Your application is designed to consume vendor cloud-only specific infrastructure services, or you have built everything around technologies that are not available on premises.
  • Your upscaling/downscaling automation is something you do not control or understand, and would not be comfortable managing on your own.
  • You think that banal (I don't mean it is unimportant) stuff such as monitoring is one of the key features of your cloud provider.

Ideally, you should be able to move from cloud to on-premises in a reasonable amount of time, and the shift should not disruptively change the way you do things.

Next time you compare the features of a cloud provider, do yourself a favor and immediately discard anything that is provider-specific and would represent a long-term tie.

I also deeply recommend spreading services (diversifying) as much as possible, and picking the service providers that are easiest to plug in and out. If your cloud provider is selling you a CDN as well as hosting, use another CDN. If you ever need to move anything from one provider to another (or from cloud to on-premises) it is going to be easier to migrate 1 service instead of 5. Reduce your dependency on a single provider.

We are not going to tear down the standard, well-known Drupal offerings, because if any of the above is true there is little left to look at with them. And because I do not want to look biased (yes this site is called I will be using Azure to give an example of the kind of things you need to avoid.

Example 1: Azure Document DB

The SQL Server community has been asking for JSON support for a long time. People are confused as to what is good in what scenarios regarding RDBMS and NoSQL. And MS comes out with a NoSQL storage engine that is only available on Azure. Do you know what is going to happen if you build your application around Azure DocumentDB? You will never be able to move away from Azure.

Example 2: Blob Storage (Azure, Amazon, etc..)

So it looked like traditional file system approaches were not suited to the cloud, and blob storage was born. But blob storage only exists in the cloud, and there is no standard whatsoever for how you should deal with it: every provider has its own API. Instead of consuming your cloud provider's specific blob API, mount a drive and try to design your application to work on a regular filesystem. You could also use a blob API abstraction layer, but dealing with blob storage will still eat away a lot of your time and increase impedance between on-premises and cloud. Besides, the reality is that blob storage only makes sense for really huge, monster applications with very specific scaling needs.

Example 3: Application specific support

Avoid this like the plague. The only support you should take is for things that are the responsibility of your cloud provider, not yours. You want your cloud provider to replace a failing hard drive, but you do not want them to fix your scaling problems on a regular basis. You also do not want them to hold all the application-infrastructure binding know-how, or else moving away is going to be a tough journey.

Example 4 : Profiling

Zend Z-Ray? Blackfire? I must agree they are great. But are they worth the lock-in? Both of them are things you cannot run on your own: Blackfire is a cloud service, and Z-Ray can only be installed on Zend Server (this is going to change soon, though). For 99% of what profiling is used for, you are perfectly served by XHGUI + uprofiler, and setting it up takes less than five minutes (or 0 if you make its deployment part of your automation). Once Z-Ray can be deployed on non-Zend Server PHP it will also be a very competitive alternative.

With these examples you have a clear insight on what kind of things you should be looking for before choosing one technology or another to avoid vendor lock-in:

  • (1) Can I run it on-premises?
    • If the answer is no, start looking somewhere else.
  • (2) Is the licensing scheme and pricing for on premises suitable for my project?
    • Say you have chosen to build something on top of MSSQL Server, which you can run on-premises for free (with a 10GB per-database limit). If you ever exceed that quota, the on-premises price might be a little steep (at $3,000 per core). But still, that is not *that* expensive considering you can use the license for many years.
  • (3) Once (1) and (2) have clearance, now it is time to evaluate features, ease of use and others.

What database engine should you choose? MSSQL, MySQL, MariaDB, Couchbase, Mongo?

Whatever you choose make sure that you are able to switch to another mainstream competitor with 0 application disruption.

It takes a few MSSQL-to-Oracle migrations to find out that accomplishing this is not as difficult as it looks, but that it must be done from the beginning. Imagine you need to move to Oracle, only to find out that the table name limit is 30 characters. Your application has 100 tables and more than 1,000 columns, and 50% of those names are more than 30 characters long. That is not fun. You can probably swallow that with ease in a strongly typed language, but the effort is many times bigger in loosely typed languages such as PHP. And after you are done with that, it turns out that some of the data types you are using are specific to your database engine (such as BIT, which is not supported on Oracle). You look to God and ask yourself why you picked a BIT instead of an INT, and realize that any argument you held at the time of the decision was banal and meaningless, and would have made little to no difference to development times or application quality.
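A cheap early-warning check for that particular trap is to scan a schema dump for identifiers that would break Oracle's classic 30-character limit. A sketch: the dump is a fake fixture here so the script is runnable as-is; point SCHEMA at a real schema dump instead:

```shell
#!/bin/sh
# Flag identifiers longer than 30 characters in a SQL schema dump --
# anything matched here would break on classic Oracle. The dump is a
# fake fixture (assumption); point SCHEMA at your real dump.
SCHEMA=$(mktemp)
cat > "$SCHEMA" <<'EOF'
CREATE TABLE short_name (id INT);
CREATE TABLE a_really_long_table_name_over_thirty_chars (id INT);
EOF

LONG_IDS=$(grep -oE '[A-Za-z_][A-Za-z0-9_]{30,}' "$SCHEMA" | sort -u)
echo "$LONG_IDS"
```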

Build your DAL around an ORM or abstraction layer that ensures database portability to at least MSSQL, MySQL and Oracle. Build your application around one of the big ones, preferably one that encourages good practices, such as MSSQL.

Image 2 - Make sure you don't sell your arse to the English for a handful of PaaS lock-in features

For us, the most cost-effective cloud provider that complies with most of the requirements set out above is Rackspace, because:

  • It is not focused on a specific technology or software stack. Windows or Linux you may have it with first grade support.
  • It has the right infrastructure vision. You can mix cloud with dedicated to best suit your needs. You can even spin up a dedicated server with by-the-hour billing (managed as if it were any of your other cloud servers) thanks to their OnMetal offering.
  • It gives you the tools to build your own platform on industry standards, encouraging you to learn what you do, resulting in zero friction when moving to another provider, to dedicated hosting or on-premises.
  • They give you specialized support - but for infrastructure design - not for your specific application.

But Rackspace is way too expensive, though suitable for projects that demand strong certification such as SAS 70. If you want something cheaper and still extremely good, I personally recommend Vultr.

PaaS providers are on a price and feature war. You must stay away from it.

Image 3 - The Mohicans already knew that wars have no winners and no losers


Nov 26 2015
Nov 26

The #d8rules team is excited to welcome Acquia to our list of supporters. With their generous support of fully funding milestone 2, fago & klausi can plan dedicated time over the next months to focus on getting the MVP of Rules for Drupal 8 done.

Since finishing milestone 1 and DrupalCon Barcelona, we are basically in a developer-preview state. The basic APIs of Rules 8 are already pretty stable, enabling contributed module porters to start work on their integrations.

Milestone 2

Milestone 2 is all about getting a usable product to developers and end users of Drupal 8.

See what's planned for M2:

  • Completing Rules engine features (Metadata assertions, logging service)

  • Rules plug-ins part two (Events, Loops, caching, components API)

  • Configuration management system support (config entity, CMI support, integrity checks & config schema)

  • Generic rules integrations (Typed data & entity support)

  • Entity token support

  • Basically usable UI (Nothing fancy yet)
  • Basic API for embedding condition and action forms

The estimated remaining 316 hours for M2 are fully funded by Acquia, drunomics and epiqo. Acquia is putting in € 14.220,- to help us work continuously over the next months; drunomics and epiqo are contributing 50% via a lowered rate of € 45 for fago and klausi to work on #d8rules during office hours.

We are expecting a release for M2 at the beginning of March 2016. This should allow the 25% of Drupal 7 sites which use the Rules module to start building for Drupal 8. Of course, we are also looking forward to seeing new adopters making use of flexible, UI-driven workflows.

Thanks again to everybody helping speed up our work of porting Rules to Drupal 8. If you'd like to help secure funding for Milestone 3, let's get in contact.

d8rules M2 thumbs up

Nov 26 2015
Nov 26

So... Drupal 8 got released! Congrats everybody! The end of life of Drupal 6 is now final. In addition, on the 16th of November it was announced that is now serving its content and files via Fastly, which gives a significant performance boost. Well done!

Furthermore, here is what I noticed last month in module updates:

1) Scroll to destination anchors

This module modifies the behaviour of an ‘anchor’ within a page, so the page will not jump down but smoothly scroll down. We have installed this module here.

2) Spider Slap

There are a lot of ‘evil spiders’ active on the internet: web crawlers that don’t respect what is written in your robots.txt. They can cause unnecessary load on your server and uncover information that you don’t want to see in a search engine.
This module solves that problem: it blocks the IP of a misbehaving spider, with the result that it no longer has access.

3) Bounce Convert

Do you want to make an announcement at the last moment before a visitor closes your Drupal website? Then this module can be useful. It functions like Exit Monitor or Bounce Exchange.

Introduction video.

!) Note that it is currently an alpha module, so not yet suitable for live Drupal sites.

4) Database Email Encryption

Do you want to maximise the security of your registered users’ email addresses? This is possible with this module: it encrypts the addresses in the database using AES. Should the database end up in the wrong hands, the email addresses cannot be read.

5) Unique field

A popular module that has existed since Drupal 5, but I never noticed it before. It checks entered fields (e.g. the title field) for uniqueness, preventing duplicate titles, which is good for SEO, among other things.

6) Login History

By default Drupal does not keep a login archive. This module does it for you: it creates an archive in which the history of logins is stored.

7) Sitemap

Generates a sitemap for your Drupal 8 website and can also create RSS feeds for example for your blog. This is the Drupal 8 version for the popular Drupal 7 module Sitemap.

8) D8 Editor File Upload

Easily place files in content. This Drupal 8 module adds a new button to the editor, which will make it easy to upload and place files.

9) Client side Validation

Validating a form in your Drupal website without refreshing the page. This widely used module now offers a Release Candidate for Drupal 8.

10) App Link

You probably recognize this one: the banner above a website on your smartphone telling you that you can view the page in a native app. If you built and linked an app (e.g. via DrupalGap), then you can generate this ‘app message’ on your Drupal website using this module.

11) OpenLucius News

One of our own modules that has to be mentioned ;). It extends the Drupal social intranet OpenLucius with a ‘news tab’ on the homepage, where news about the organization can be posted.

12) Simple XML sitemap

The title of this Drupal 8 module says it all: it provides an XML sitemap that you can submit to search engines, so you can follow the indexation of all your site links in the relevant search engine.
The module also has a few configuration options, such as setting ‘priority’.

13) Session Limit

Tighten the security of your Drupal system by limiting the number of sessions with which a user can be logged in. For example, you can limit a user to a single session: if somebody logs in on their smartphone, they are automatically logged out on their work computer.

14) Login Security

Provides additional security when logging in. For example, it is possible to:

  • Set how many times a user can attempt to log in before the account is blocked.
  • Deny access based on IP, temporarily or permanently.

The module can also send emails (or a log to Nagios) to alert the Drupal administrator that something is going on:

  • Passwords and accounts appear to be being guessed.
  • Brute-force attacks or other inappropriate login behaviour is detected.

15) OpenLucius LDAP

Another one of our own modules that should be mentioned as well ;-). It extends the Drupal social intranet OpenLucius with an LDAP connection, so that users can log in to OpenLucius with their existing LDAP account.

16) Protected node

Gives additional protection to a certain page (node). A password can be set when creating the node; anybody who then wants to view the node must enter the password to get access.

17) Code per Node

It is common to deploy Drupal code (PHP, JS, CSS) via Git through a DTAP pipeline to a live Drupal server, usually with the help of a Continuous Integration tool.

But with this module you can apply quick fixes per page without that whole pipeline. It lets you add additional CSS per node, content type, block, or globally.

Not how we would do it, but I can imagine that this could be a handy tool for Drupal site builders. This is probably also the reason why it is so popular.

18) Admin Toolbar

A handy administration toolbar for Drupal 8.

Wrap up

Alright, these are the modules for this month. In December we will again introduce new ‘cool Drupal modules’, so stay tuned!

Nov 25 2015
Nov 25

Post date: November 26 2015

This is the second year that DrupalCampMelbourne has been run in its current form, and it’s expected to be just as much fun as it was last year, but maybe just a bit bigger.

DrupalCampMelbourne is a two day event, with one day of sessions and one day of code sprints, but the way it’s run is a little bit unique (as far as we know). Unlike a conventional Conference or Camp, the scheduling is 100% determined by the attendees on the day.

How does that work you ask?

It’s relatively simple:

  1. First thing Friday morning all attendees get the opportunity to do a short lightning talk explaining the topic they wish to cover.
  2. During the lightning talks, all attendees will vote on the sessions they wish to see.
  3. Finally, after all lightning talks and voting is complete, the DrupalCampMelbourne website auto-magically builds the schedule based off the votes, number of sessions and room sizes.
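The auto-magic scheduling step can be sketched as a simple greedy assignment. This is a toy illustration only, not the actual DrupalCampMelbourne algorithm; the sessions, vote counts and rooms are invented:

```python
# Toy sketch of vote-based scheduling: rank sessions by votes and
# assign the most popular talks to the largest rooms, cycling through
# the rooms for successive slots. Not the real DrupalCampMelbourne
# code; all data below is invented.

def build_schedule(session_votes, room_sizes):
    """Map each session to a room, most-voted sessions first."""
    ranked = sorted(session_votes, key=session_votes.get, reverse=True)
    rooms = sorted(room_sizes, key=room_sizes.get, reverse=True)
    return {talk: rooms[i % len(rooms)] for i, talk in enumerate(ranked)}

votes = {"Drupal 8 theming": 40, "Migrate API": 25, "Composer basics": 10}
rooms = {"Auditorium": 100, "Lab": 30}
print(build_schedule(votes, rooms))
```

A real implementation would also account for time slots and speaker clashes, but the core idea is the same: popularity decides both selection and room size.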

We ran this approach for the first time last year and it worked superbly, and with a little tweaking to the algorithm this year we expect it to be just as good, if not better.

The major benefits of this approach are:

  1. Everyone gets an opportunity to have their say, both in submitting a session and in voting on what they’d like to see.
  2. No “committee” or “track chairs” are required to vet every talk and make the final decisions, reducing the organisational time of the event.
  3. SkyNet is one step closer to taking hold of us all… oh wait.

So if you are coming (you are coming right?), make sure to get their early and have your say. And remember, everyone has something worth saying and worth hearing, and there’s nowhere better to start than a local community.

There are still some tickets left for the event, so if you haven’t got yours, get it now:

The future (a.k.a, SkyNet?)

The auto-magic scheduling of the talks is but the beginning, just as the day of sessions is just the beginning of DrupalCampMelbourne.

Day 2 of DrupalCampMelbourne is, as it was last year, a Code Sprint. This year, I will be running a sprint on the future of the DrupalCampMelbourne website in the hopes to make it even better; more autonomous, more usable and also more generic.

More autonomous

The “auto-magic” scheduling feature is a great help for running a DrupalCamp, it helps get Day 1 all sorted with minimal effort, but it’s not the only part that can be automated and improved. A larger portion of the camp itself could be automated.

If, when setting up the next camp, one were to provide the site with the date of the event, the camp could set a schedule for the organisers (when to have the venue booked by, when to contact sponsors, etc.), it could transition through various states (register your interest, event information, signup, etc.), it could manage the budget (venue cost + resource costs - sponsors - tickets = success) and much more.

The possibilities are endless.

More usable

There’s no question that there have been some issues, certain information lacking, not enough communication, and other various management related issues; this is inevitable when the number of volunteers is in the low single digits and the time those volunteers have is equally lacking.

DrupalCampMelbourne is as open source as it can be at the moment; the source code is entirely available for anyone at all to contribute to. This year I want to push forward and get more people involved: let’s make sure the site is more usable in the future, makes more information available, and provides the communication it has been missing.

More generic

There is absolutely no reason that this project should be specific to DrupalCampMelbourne, nor even a DrupalCamp at all, it could apply to any type of Camp style event in any locale.

Genericising the existing work and building a new DrupalCamp/Camp Drupal distribution has been a goal from the very start, and with Drupal 8 out it’s the best time to do exactly that.

So come along to DrupalCampMelbourne 2015 on Saturday (and Friday) and get involved. This is only one of the various sprints that will be run during the Code Sprint. And don’t forget, a Code Sprint isn’t just for developers; there’s something for everyone, from novice to professional.

Nov 25 2015
Nov 25

As with many crazy ideas this all started in a bar. No ordinary bar, this was DrupalCon Barcelona Trivia Night. @BarisW said “so what are you going to do about Drupal 8’s release, we’re relying on you”. I was fazed, but it set my mind in motion.

I am one of those people who, when faced with a challenge, believe the impossible is possible. How do I reach every corner of the globe and spread the word of this amazing new Drupal 8 thing? This is the story of how Celebr8D8 came to be, and I didn't do it alone.

Why was I so driven to do this?

Simple. Whilst I have been very fortunate to have attended many DrupalCons and experienced the scale and diversity of the Drupal community first hand, many have not. Many never will. Few have any concept of the global movement which builds the software. It all happens behind closed doors. I wanted to show the world that there are people in every place across the globe involved in Drupal. Of every creed, colour and background.

Germinating the ideas

So I decided it would be an interesting project to try and persuade as many people as I could muster, from as broad a range of backgrounds as I could get, to film themselves reciting a script about Drupal 8. This could then be edited to feature each person in sequence. The inspiration came from two pop videos: Cry and Band Aid.

Like most big ideas in Drupal this was going to need help. Enter Jeffrey A. "jam" McGuire, Robert Douglass and Campbell Vertesi. A few emails later we had a concept. Then out of the blue Campbell suggested Celebr8D8. BOOM! I instantly knew this was the stroke of genius on which to hang our idea. Jam and I had Skype chats, created the obligatory Google Docs, and sent dozens of persuasive emails. In less than two days we had a concept, a script and a list of 20+ people willing to take part.


But then I thought, wouldn’t it be amazing to make a Drupal 8 site for this film to live on? And how about we secretly approach dozens of people to film themselves talking about what Drupal 8 meant to them? Using Jam’s and my little black books, we emailed as many people as we could think of, with sample videos inviting them to join in.

Meanwhile I twisted a few arms - Amy Leak (Designer), Matt Smith (Developer), Alison Hover (Themer) and James Hall (Site Builder) all committed to creating the Drupal 8 site. They were amazing, literally a self-managed team who made magic.

It was all coming together, or so I thought. Time was passing and I was becoming anxious. As social media lead for Drupal, I was one of the few people who knew down to the hour when Drupal 8 was coming out. 4 days to go and no crowd sourced videos. I stayed up until the early hours emailing people, persuading them to commit. Slowly the films started to trickle in, then a wave.

Meanwhile, the feature film submissions were coming in thick and fast. Enter Graham Brown (@vaccineMedia), producer of the film. Working into the early hours for several days with little direction, he created the film we all watched on release day. And then Jam told us he’d had a call to go to Antwerp to meet Dries, with a film crew (thanks Acquia!). Suddenly we had an exclusive; the big guy was in. Sometimes I think destiny has a part to play in life, and this was one of those days.


But how do we get people to know about this site?

So we had an amazing film coming together, a site and dozens of community films flying in from around the globe. But how do we draw an audience? Well, if there’s one thing I’ve learnt in Drupal, it’s that if you can inspire the community into action, amazing things happen. So I set about enabling another mad idea I had: “The social media Mexican wave”.

I thought that if I could contact a person in every country where there is Drupal and ask them to retweet one tweet about our site, at a certain time in their local timezone, we could achieve something interesting. Chasing the sun around the globe, a steady rhythm of retweets would ripple across the world like the Mexican Waves in 1990’s football crowds.
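The timing behind the wave can be sketched in a few lines: fix a local send time, convert it to UTC per timezone, and the retweets naturally chase the sun westward. This is an illustrative sketch, not the tool actually used, and the timezone list is invented:

```python
# Sketch of the "social media Mexican wave": schedule one retweet at
# 09:00 local time in each volunteer's timezone, so the campaign
# ripples westward around the globe. The timezone list is illustrative.
from datetime import date, datetime, timezone
from zoneinfo import ZoneInfo

ZONES = ["Pacific/Auckland", "Australia/Melbourne",
         "Europe/London", "America/New_York"]

def wave_order(day, zones=ZONES, hour=9):
    """Order zones by when 09:00 local time occurs in UTC."""
    def send_time_utc(tz):
        return datetime(day.year, day.month, day.day, hour,
                        tzinfo=ZoneInfo(tz)).astimezone(timezone.utc)
    return sorted(zones, key=send_time_utc)

order = wave_order(date(2015, 11, 19))
print(order[0])  # the wave starts just west of the date line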

[embedded content]

So I created a webform for people to volunteer, sent a couple of tweets linking to the form and went to bed. In the morning I had a tonne of retweets and, more importantly, 199 volunteers with a potential reach of 350,000 people! Wow! So I emailed the volunteers and primed them with my plan so that, come Drupal 8 release day, they could support us on social media and give us a boost!

Another secret piece of my social media master plan was to ask people for their Twitter username when they submitted films. Whilst moderating each film, I took a screenshot, added it to a scheduled tweet in Hootsuite with a link to the page on our site, and @mentioned the person in the tweet. Doing so would guarantee that the person noticed, was hopefully flattered, and duly retweeted. Each tweet was carefully primed to go out at the right time, when that person was awake in their part of the world. Naturally most people retweeted and boosted our impact. (An export of these tweets is available to download below, with stats, so you can copy my ideas.)

The hashtag #Celebr8D8 was the icing on the cake. The Drupal Association contacted me asking what hashtag could be used for the 207 release parties happening around the world. They loved the one we had planned and asked if they could use it too. Well, no one owns hashtags, and I thought it would be fun and more effective to combine efforts.

So come 9am on 19th November 2015, the first tweet announcing the site went out as I slept. Thanks, Hootsuite (and many others to follow that day). In New Zealand, Josh Waihi’s short film launched the site with a humble tweet and lit the touch paper of what became a 24-hour period which saw our campaign reach 250,000+ people. The Drupal community really got behind the idea, our films were watched thousands of times, and a huge feeling of being connected was achieved.

We were very fortunate to have the full power of Platform.sh, provided for FREE (thanks Robert Douglass!), so we knew that no matter how busy the site got, it would stay up. I’m sure Platform.sh hardly noticed, but I was pretty stoked when I saw there were 386 visitors on the site at one moment.

Was it a success?

What started as an idea by one person and a few friends took flight, and it felt like the whole world joined us. People from 115 countries came to the site and watched our films. For the 7 hours between 10:00 and 17:00 GMT the site sustained over 200 concurrent users. We supported 7,396 sessions by 5,474 users and 22,061 page views. Not bad for an entirely volunteer team who had less than 2 weeks’ notice.


222.5K Twitter impressions in 3 days. The main announcement tweet reached 48,042 people with 309 retweets.

We asked people what #Drupal8 means to them. Their response will delight you #celebr8d8

— Celebrate Drupal 8 (@celebr8d8) November 18, 2015

Celebr8 Drupal 8 the Film

Graham's headline film was watched 1596 times in 24 hours.

[embedded content]

Whose films were the most popular?

I know how competitive the Drupal community is, so these are in order of popularity.

  1. Dries and Jam Belgium and NZ
  2. MortendK Denmark
  3. Drupal Association USA
  4. StavrianaNathan
  5. Grienauer Austria
  6. Noah Australia
  7. Net Studio Greece
  8. Lewis and Emma UK
  9. Andrew McPhaerson UK
  10. Amazee Switzerland
  11. @shyam_raj
  12. Steve Purkiss UK
  13. Dave Hall Australia

Stats available below

Below you will find more remarkable stats I’ve taken from Google Analytics, Twitter Analytics and Bitly. In the spirit of open source, there is also a spreadsheet with some of the top-level stats and some PDFs you may use and distribute freely under the Creative Commons Attribution-ShareAlike 3.0 license.

All of this makes me very happy. But it would not have been possible without some very special people. I’d like to close by saying a huge thanks to Jeffrey A. "jam" McGuire and his lovely wife Francesca, who tolerated me hijacking him for the best part of two weeks to help pull my mad plan together. Without Jam this would not have been possible, nor as amazing. And to Graham, Matt, Ali, James and Amy for tirelessly working on the film and site!

Let’s do it again some time, but for now, can I have a rest?

In the meantime here are a few of my favourite tweets from this very memorable day...


With so many to choose from, here is a selection of the tweets I favourited on Twitter.

Drupal Saudi Arabia

[Arabic-language tweet] #Celebr8D8 #Drupal8 #drupal @Raeda1 @DrupalAssoc @celebr8d8 @drupal

— Essam Al-Qaie (@EssamAlQaie) November 19, 2015

Cerebr8ting Drupal8 in Munich! #Celebr8D8

— Maria Blum (@BlumCodes) November 19, 2015

Join us next Thursday to #Celebr8D8 — a piñata is in the forecast —

— Amazee Labs Austin (@amazeelabs_atx) November 12, 2015

The Moment We've All Been Waiting For: Drupal 8 is Here! #Celebr8D8!

— Duo (@DuoConsulting) November 19, 2015

#celebr8d8 up up up. @drupalcamppune celebrating drupal 8 launch. Sky lanterns. #drupal

— Prafful Nagwani (@nagwaniz) November 19, 2015

Drupal Nigeria

A little #Drupal8 release party happened last Friday, Eket, Nigeria. #Celebr8D8. Awesome!!

— Aniebiet Udoh (@almaudoh) November 23, 2015

Drupal Bangalore

#celebr8d8 Bangalore Drupal 8 celebrations !

— pvishnuvijayan (@pvishnuvijayan) November 21, 2015

Welcoming the new baby in gang #Celebr8D8 #Drupal8 ##drupal

— AddWebSolution (@AddWebSolution) November 21, 2015

and the party is still on ;) Celebrating @Drupal8ishere :D Are you celebrating? #Celebr8D8 #Drupal8 #Cyprus

— OpiumWorks (@OpiumWorks) November 20, 2015

Looking for the #Celebr8D8 party in Melbourne? It’s near these people!

— cafuego (@cafuego) November 20, 2015

Setting up for the Portland #Celebr8D8 shin dig!

— Holly Ross (@drupalhross) November 20, 2015

Getting ready to party in Portland with @ryanaslett #celebr8d8

— Drupal Association (@DrupalAssoc) November 19, 2015

drupal 8 release party @Blisstering !!!! @DrupalMumbai @celebr8d8 @DrupalAssoc

— Blisstering (@Blisstering) November 19, 2015

Consider this the start of a slow clap for all those #Celebr8D8 cakes. Delicious work, #Drupal, delicious work.

— Amazee Labs Austin (@amazeelabs_atx) November 19, 2015

#Celebr8D8 coming to an end. Thanks @pdjohnson

— Isabell Schulz (@murgeys) November 19, 2015

Nov 25 2015
Nov 25

Load Testing is an important part of quality assurance that takes place prior to launching a site.

When load testing, we simulate user interaction with a website, increase the frequency or the number of interactions, and collect the results of system usage, then analyze them to aid system improvement towards desired results. The data will prove useful for creating benchmarks of site performance, which can be compared with the earlier site's performance if the site is undergoing a migration.

Read more about Drupal site migrations.

The Basics

There are three phases of performing load testing.

1. Analysis & Acceptance: Obtain a prediction of potential load and agree on Acceptance Criteria

2. Behavior: Plan and design the test cases and test steps that represent typical usage and prepare the target environment

3. Execute, Modify & Repeat: Execute the test and measure the results. If necessary, implement improvements and repeat.

Analysis and Acceptance

During site migrations, we typically analyze the previous site’s traffic in Google Analytics. We look for traffic patterns and interaction patterns. Looking at the highest-pageview days over a period of a year or more gives us an idea of typical peak load. Sometimes the trends are seasonal, as with eCommerce sites, so it’s important to look at a longer period of time.

Interaction patterns give us an idea of how users use the site: whether they are logging in, which pages are most popular, how many pageviews a user typically generates, what actions are performed, etc. From this data we create target load numbers and use cases. We agree on the acceptance criteria for site performance. Those may be the number of error-free requests per minute, Apdex scores or average response times. Looking at lengthy periods of time in Analytics helps set accurate expectations and test cases.
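One common acceptance metric, the Apdex score, can be computed directly from raw response times. A minimal sketch (the threshold and sample data are invented, not from an actual test run):

```python
# Sketch of an Apdex calculation: responses at or under threshold T
# count as "satisfied", between T and 4T as "tolerating", the rest
# as "frustrated". Score = (satisfied + tolerating/2) / total.

def apdex(response_times_ms, t_ms=500):
    n = len(response_times_ms)
    satisfied = sum(1 for r in response_times_ms if r <= t_ms)
    tolerating = sum(1 for r in response_times_ms if t_ms < r <= 4 * t_ms)
    return (satisfied + tolerating / 2) / n

samples = [120, 300, 450, 800, 1500, 2500]  # invented response times, ms
print(round(apdex(samples), 2))  # 3 satisfied, 2 tolerating, 1 frustrated
```

A score of 1.0 means every request was satisfying; anything below roughly 0.85 is usually worth investigating.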


Behavior

Build test cases and test steps that will be run by the load test. Sites that provide information typically have scripts that simulate several users consuming information on some of its popular pages. Applications will have scripts which run the user through achieving a goal, such as signing up for an email, creating an account, searching for specific content, etc.


Execute, Modify & Repeat

We prefer to use Load Storm for test execution along with New Relic on the server. Load Storm spins up cloud servers which simulate user behavior through browser requests and collects the data on each request while providing a graphical representation of the results.

Using Load Storm in conjunction with New Relic, we are able to see performance patterns. We look for average response times, peak response times, slow responding requests, error rates and try to find a point of failure.

New Relic gives us the ability to tune the site by giving us insights into database performance, which modules or transactions take the most time to complete and which parts of the application have the heaviest usage. This gives us visibility into where the limitations are and where to focus our efforts.

Load Storm is the preferred testing tool of Promet Source.

At the conclusion of load testing the application performs better and everyone has peace of mind that, once the site goes live, it will be a pleasant experience for the users and the technical staff supporting the site.

Video of Load Storm Demo on Drupal Commerce sites:

[embedded content]

Nov 25 2015
Nov 25

Last week, we released Drupal 8.0.0! This amazing milestone signals a new era in Drupal core development, with new rules for what kinds of changes are allowed in patches and a new release cycle with minor versions scheduled every six months.

Now that Drupal 8 is ready for building real sites, with contributed projects already available or being built, the immediate focus for Drupal 8 core will be fixing bugs to help those real sites, as well as fixing any issues in Drupal core that prevent contributed modules from being ported to Drupal 8. Another top priority is stabilizing and completing the core Migrate feature and its user interface, so that existing Drupal 7 and especially Drupal 6 sites can move to Drupal 8 reliably. Finally, a third priority is adding frontend and performance testing to help us make changes more safely. For the next six weeks, we will mainly be committing patches that move us toward these three goals.

Then, after January 6, 2016, we will begin a broader feature development phase for innovations in 8.1.x, 8.2.x, and beyond, so long as we are able to resolve critical issues consistently and keep 8.1.x stable for a reliable scheduled minor release. Read more about the proposed development, beta, and release candidate phases for minor versions.

Drupal 8 core branches and the core issue workflow

Starting today, all patches that are accepted to core according to the priorities above will be committed first to the 8.1.x git branch (even when they are filed against 8.0.x-dev in the core issue queue). Patches that are eligible for patch releases will typically be immediately committed to 8.0.x as well. If we are close to a bugfix release window, the issue may be marked "Patch (to be ported)" and committed just after the bugfix release, to give ample time to catch any regressions or followups before each change is deployed to production sites.

Some patches will only be committed to 8.1.x (for example, if they are too disruptive for a patch release or if they make additions that are only allowed in minor releases). Keep in mind that the open feature development phase for 8.1.x has not started yet, so plan to work on general feature additions and BC-compatible API improvements after it does.

Note that for the time being, patch-release-eligible issues are still filed against 8.0.x-dev in the core issue queue and most 8.1.x issues are still postponed pending the open feature development phase. Later, we will update the core issue metadata and processes as we move into more extensive minor version development.
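The branch flow described above (a fix lands on 8.1.x first, then is applied to 8.0.x) can be sketched with plain git in a throwaway repository. The branch names mirror the core branches; the file, commit messages and issue number are hypothetical:

```shell
# Throwaway demo of the commit flow: a fix lands on 8.1.x first and is
# then cherry-picked to the 8.0.x patch-release branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
echo base > core.txt
git add core.txt
git commit -qm "Initial commit"
git branch 8.0.x                      # patch-release branch
git checkout -qb 8.1.x                # minor-release branch gets the fix first
echo fix >> core.txt
git commit -qam "Issue #NNNNNNN: Fix a bug"   # hypothetical issue number
fix=$(git rev-parse HEAD)
git checkout -q 8.0.x
git cherry-pick -x "$fix" >/dev/null  # -x records the original commit hash
git log --oneline
```

In practice the core committers apply the patch to each branch directly rather than cherry-picking, but the ordering (8.1.x first, 8.0.x second) is the point.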

Nov 24 2015
Nov 24

One of the most exciting aspects of preparing for a DrupalCon is selecting the sessions that will be presented. It’s always incredibly cool and humbling to see all the great ideas that our community comes up with, and they’re all so great that making the official selections is definitely not an easy process! This time, the Track Chairs had almost 350 sessions to read through to determine which 50 would be presented in Mumbai. A lot of hours went into reading each proposal, and in the end they had to make some hard decisions, but they are very excited about the programming we will be offering at the Con.

After all was said and done, we are looking forward to presenting 50 sessions by 72 unique speakers, 86% male and 14% female. Coming from all over the world, we are happy to announce that 52% of the speakers will be attending from the South Asia region and will be presenting alongside the other 48% of speakers who come from other global Drupal communities.

We look forward to announcing the schedule of sessions by 14 December for you to start planning your days in Mumbai!

See the Selected Sessions

See you in Mumbai!

Nov 24 2015
Nov 24

Drupal has a pretty secure structure: a small, simple and stable core that can be extended with tons of modules and themes. From Drupal 7’s initial release on January 5, 2011 until now, there have been only 17 core security updates, which is quite a small number for a period of more than four years.

But when it comes to third-party modules and themes, the picture is quite different. Although only modules with official releases are reviewed by the security team, or have security announcements issued, security issues are reported every week across the 11,000+ third-party modules and themes for Drupal 7.

And using custom modules is even more dangerous if they are not tested properly. Let’s face it: no one uses Drupal without modules. That’s why I will share with you some of the best open source tools to improve the security of your website.

Knowing your opponent’s moves helps you better prepare your defenses. That’s why we will try to attack with every currently known method of vulnerability testing. All the tools I will show are easy to use without any knowledge of the source code. And the best part is, you can use this strategy indefinitely, as long as you keep these tools up-to-date. Remember: update first, then test.

Being Up-to-Date

I can’t emphasize enough how important it is to keep all your stuff up-to-date, so let’s start with that idea: if one tiny part of your website has a security breach, the whole system is compromised. That’s why you should check for updates for the core and the modules you are using. There are reports you can find on Drupal’s official page; if you find that there is a security update available, apply it immediately.

Metasploit + Armitage = Hail Mary!

Start with Kali Linux: it's small, and has Metasploit and Armitage pre-installed. Armitage gives you a GUI, exploit recommendations, and use of the advanced features of Metasploit Framework's Meterpreter. (But remember to get updates every time you're about to run tests.)

Then, get an exact clone of the server; same machine, database, structure, OS version, etc.

NOTE: It is not recommended you use this technique on live websites because there is a chance the server will go down.

Now you’re ready to put on the white hat and get the party started.

  1. Do a scan. Nmap Scan is integrated into Armitage. However, I recommend using it outside Armitage since you can configure the scan parameters better. There are a lot of different options to choose from. I use the GUI version Zenmap – which also comes preinstalled on Kali Linux – and the following command:
    nmap -sS -p 1-65535 -T4 -A -v
    • sS: Stealth SYN scan
    • p 1-65535: All ports
    • T4: Prohibits the dynamic scan delay from exceeding 10 ms for TCP ports
    • A: Enable OS detection, version detection, script scanning, and traceroute
    • v: Increase verbosity level
  2. After you scan, save the file (scan.xml).
  3. Add host: From the navigation menu “Hosts” -> “Import Hosts” and choose scan.xml.
  4. From the navigation menu, choose “Attacks” -> “Find Attacks”.
  5. From the navigation menu, choose “Attacks” -> “Hail Mary”.
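The scan-and-import steps above can be composed into a single reviewable command line. This sketch only assembles and prints the command rather than running it (192.0.2.10 is a documentation placeholder address; -oX is the nmap option that writes the scan.xml file Armitage imports in step 3):

```shell
# Compose the nmap invocation from the flags described above and print
# it for review before running it against your test clone.
TARGET=192.0.2.10                    # placeholder: replace with your clone
FLAGS="-sS -p 1-65535 -T4 -A -v"
CMD="nmap $FLAGS -oX scan.xml $TARGET"
echo "$CMD"
```

Running the printed command (as root, since -sS needs raw sockets) produces scan.xml directly, which saves the manual “save the file” step when you are scripting repeated test runs.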

Hail Mary finds exploits relevant to your targets, filters the exploits using known information, and then sorts them into an optimal order.

Important: When you use msfupdate, the database doesn’t get all the possible exploits. When you find some exploit that you want to try on your site, you have to manually add it and execute it. Here’s how:

  1. Download the exploit from exploit-db or write a script on your own.
  2. Put it in the ~/.msf4/modules/exploits/<your_folder> directory. Any exploit put here will be detected by Metasploit when it starts.
  3. Execute it with: use exploit/your_folder/exploit_name.


Wapiti

Wapiti is a powerful scanner. It supports a variety of attacks and, in the end, provides nice reports in different formats. You can read more about it on the official site. When you open the console and type in wapiti, the wapiti help will load. I use:
wapiti <target URL> -n 10 -b folder -u -v 1 -f html -o /tmp/scan_report

  • n: Define a limit of URLs to read with the same pattern, to prevent endless loops, here, limit must be greater than 0.
  • b: Set the scope of the scan; analyze all the links to the pages which are in the same domain as the URL passed.
  • u: Use color to highlight vulnerable parameters in output.
  • v: Define verbosity level; print each URL.
  • f: Define report type; choose HTML format.
  • o: Define report destination; in our case, it must be a directory because we chose HTML format.

NOTE: It is possible that you’ll encounter “You have an outdated version of python-requests. Please upgrade.” The fix is pip install requests --upgrade.


CMSmap

CMSmap is another free open source vulnerability scanner, which supports WordPress, Joomla, and Drupal. It also supports brute force, but Drupal is solid there since it blocks the user after the fifth failed password attempt.

CMSmap is not preinstalled in Kali, so you’ll have to download it with git clone from the project’s repository.
To run the tool, type:

cd CMSmap/

I use the following configuration command:
./cmsmap.py -t <target URL> -f D -F -o CMSmap_example_results.txt

  • t: Target URL.
  • f D: Force scan for Drupal.
  • F: Full scan using large plugin lists.
  • o: Save output in file.

That’s all, folks.

But remember: “The quieter you become, the more you are able to hear.”

Image: "Security" by Henri Bergius is licensed under CC BY-SA 2.0

Nov 24 2015
Nov 24

If you are still using the same Nginx configuration that you have been for Drupal 7 on your new Drupal 8 install, most things will continue to work; however, depending on the exact directives and expressions you are using, you might notice a few operational problems here and there that cause some minor difficulties.  The good news is that the Drupal configuration recipe in the Nginx documentation has been updated to work with Drupal 8, so if you have a very basic Nginx setup, you can just grab that and you’ll be good to go.  If your configuration file is a little complicated, and you do not want to just start over, the advice presented below might be helpful when fixing up any defects you may have inadvertently inherited.

Here are three signs that your Nginx configuration needs some fine-tuning to work with Drupal 8.

Can’t Run Update.php

In Drupal 8, the update.php script now displays an instructional dialog with a “Continue” button. Clicking “Continue” will bring the user to the URL update.php/selection, which runs the actual database update operation.

Expected Result:

Clicking the “Continue” button from the update.php script should run the update operation, and then display another dialog for the user.


Actual Result:

Clicking the “Continue” button from the update.php script will bring the user to the URL update.php/selection, but the user is presented with a “Page not found” error, and the update operation does not run.


The URL update.php/selection is a little unusual, being a little bit like a clean URL, and a little bit like a mixed php-script with query parameters, except that /selection is used instead of query parameters. Some of the Nginx configuration examples were not written with these patterns in mind, so some location directives will fail to match them.

Nginx Configuration Fix:

Confirm that your location directives are not written to require that the .php extension appear on the end of the URL.
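As a reference point, a strict rule along the following lines routes only update.php and the Drupal-8-style paths beneath it (such as update.php/selection) to PHP. This is a sketch, not the exact recipe: the fastcgi_pass target and the included parameter file are assumptions that depend on your PHP-FPM setup.

```nginx
# Match /update.php itself plus trailing paths like /update.php/selection,
# without turning every *.php URL on the site into a front controller.
location ~ ^/update\.php(/|$) {
    # Split "/update.php" (the script) from "/selection" (the path info).
    fastcgi_split_path_info ^(/update\.php)(|/.*)$;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    include fastcgi_params;
    # Assumption: PHP-FPM is listening on this socket in your environment.
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```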

Note: The suggested location rule is fairly strict, and only allows the unusual Drupal-8-style paths for the update.php front controller. Being very strict about which paths are matched allows us to continue to route paths such as blog/index.php/title through Drupal, for sites that need to maintain legacy URLs from a previous implementation. If your site does not need to route URLs that contain .php, then you might prefer a more lenient location rule, such as:

          location ~ \.php(/|$) {

The benefit of using a less restrictive rule is that you will not need to update your Nginx configuration in the future, should a new version of Drupal start using this style of URL with front controllers other than update.php.

Can’t Install Modules from the Admin Interface

Drupal 7 introduced the feature that allows site administrators to install new modules from the Admin interface, simply by supplying the URL to a module download link, or by uploading a module using their web browser. This process relies on a script that determines whether the user has access rights to install modules. Under Drupal 8, this script is located at core/authorize.php; however, a bug in Drupal results in the URL core/authorize.php/core/authorize.php being used instead.

Expected Result:

When working correctly, installing a module through the admin interface will bring up a progress bar while the module is installed onto the site.


Actual Result:

If Nginx is not configured correctly for Drupal 8, then instead of the progress dialog, the user will see an Ajax error dialog reporting that core/authorize.php/core/authorize.php could not be found.


When the URL core/authorize.php/core/authorize.php is accessed on an Apache web server, Apache finds the authorize.php script in the correct relative location once it processes the first part of the path; the second core/authorize.php is then passed into the script as the SCRIPT_PATH, which Drupal ignores. Some recommended configuration settings, however, will cause Nginx to attempt to process the entirety of core/authorize.php/core/authorize.php as a single route, which it will not be able to find, causing the error.

Nginx Configuration Fix:

Change the regular expression used to partition the SCRIPT_PATH and PATH_INFO to use a non-greedy wildcard match, so that only the portion of the URL up to the first authorize.php will be included in the SCRIPT_PATH.

By default, Nginx uses a greedy match.  +? is the non-greedy variant.
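Concretely, the change in the split directive looks something like this (a sketch; only the regular expression matters here):

```nginx
# Greedy (problematic): (.+\.php) consumes up to the LAST ".php", so the
# whole of core/authorize.php/core/authorize.php is treated as the script.
fastcgi_split_path_info ^(.+\.php)(/.+)$;

# Non-greedy (fixed): (.+?\.php) stops at the FIRST ".php", so everything
# after the first core/authorize.php becomes PATH_INFO, which Drupal ignores.
fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
```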

Some Navigation Elements are Missing CSS Styling

Drupal 8 uses Javascript to apply the class is-active to navigation elements on the page that correspond to the current page being viewed.  This allows themers to apply CSS styles to highlight the active menu items, and so on.

Expected Result:

The styling used will vary based on the theme being used. In Bartik, the default theme, if you move the main navigation menu to a sidebar, then the menu item link that corresponds to the current page should be colored black instead of blue.


Actual Result:

If your Nginx configuration is not correct, then the active menu item link will be styled exactly the same as all of the other menu items.


Drupal includes attributes in the navigation page elements that indicate which pages those elements should be considered to be active. Drupal’s Javascript in turn builds a selection expression based on the current path and query string from the current page URL. A misconfigured Nginx configuration file can cause the query string to be altered in such a way as to prevent the selection expression from matching the page elements that should have the is-active class added.

Nginx Configuration Fix:

This is less likely to be encountered, because the configuration that works with Drupal 8 is the same as the one that is also recommended for Drupal 7. You will only have trouble if you use the older configuration that was recommended for Drupal 6, or some variant thereof. Of course, there is a very large variation in the kinds of configuration files that can be created with Nginx, and not all of these will look exactly like the examples shown above. Hopefully, though, these explanations will go a long way towards explaining how to correct the configuration directives you have, should you encounter any problems similar to these.
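For illustration, the difference boils down to how requests are rewritten to index.php. This is a simplified sketch of the two styles, not a complete server block:

```nginx
# Drupal-6-era style: re-encodes the path as ?q=..., which can alter the
# query string that Drupal 8's Javascript uses when building its is-active
# selection expression.
location / {
    rewrite ^/(.*)$ /index.php?q=$1;
}

# Style recommended for Drupal 7 and 8: the original query string is
# passed through unchanged.
location / {
    try_files $uri /index.php?$query_string;
}
```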


I am indebted to Damien Tournoud and Michelle Krejci, who were instrumental in analyzing these configuration issues. I would also like to thank Jingsheng Wang, who first published the update.php fix.

Topics Development, Drupal Planet, Drupal
Nov 24 2015
Nov 24

Cache clearing nirvana may be two vsets away

tl;dr If your D7 site uses features or has many entity types, some recent patches to the features module and the entity api module may deliver dramatic performance increases when you clear Drupal's cache. The magic:

    $ drush vset features_rebuild_on_flush FALSE
    $ drush vset entity_rebuild_on_flush FALSE

The Backstory

Given that tedbow is a good friend in our little slice of paradise, aka Ithaca, NY, we decided that we were going to embrace the entityform module on the large Drupal migration I was hired to lead. Fifty-eight entityforms and 420 fields later (even with diligent field re-use), we now see how, in some cases, a pseudo-field system has real benefits, even if it's not the most future-proof solution. As our cache clears became slower and slower (at times taking nearly 10 minutes for a teammate with an older computer), I began to suspect that entityform and/or our extensive reliance on the Drupal field system might be the culprit. Another corroborating data point was the length of time that feature reverts took when they involved entityforms. Even deployments became a hassle because we had to carefully time them if they required the cache to be cleared, which would make the site unresponsive for logged-in users and cache-cold pages for 5 minutes or more. Clearly, something needed to be done.


I'm sure there are better ways to handle performance diagnostics (using xDebug, for example), but given the procedural nature of drupal_flush_all_caches() it seemed like the devel module would work just fine. I modified the code in Drupal's includes/ file to include the following:

function time_elapsed($comment, $force = FALSE) {
  static $time_elapsed_last = null;
  static $time_elapsed_start = null;

  $unit = "s"; $scale = 1000000; // output in seconds
  $now = microtime(true);
  if ($time_elapsed_last != null) {
    $elapsed = round(($now - $time_elapsed_last) * 1000000) / $scale;
    $total_time = round(($now - $time_elapsed_start) * 1000000) / $scale;
    $msg = "$comment: Time elapsed: $elapsed $unit,";
    $msg .= " total time: $total_time $unit";
    dpm($msg);
  }
  else {
    $time_elapsed_start = $now;
  }
  $time_elapsed_last = $now;
}
/**
 * Flushes all cached data on the site.
 *
 * Empties cache tables, rebuilds the menu cache and theme registries, and
 * invokes a hook so that other modules' cache data can be cleared as well.
 */
function drupal_flush_all_caches() {
  // The time_elapsed() calls below are the instrumentation I added.
  time_elapsed('start');
  // Change query-strings on css/js files to enforce reload for all users.
  _drupal_flush_css_js();
  time_elapsed('registry_rebuild');
  registry_rebuild();
  time_elapsed('drupal_clear_css_cache');
  drupal_clear_css_cache();
  time_elapsed('drupal_clear_js_cache');
  drupal_clear_js_cache();

  // Rebuild the theme data. Note that the module data is rebuilt above, as
  // part of registry_rebuild().
  time_elapsed('system_rebuild_theme_data');
  system_rebuild_theme_data();
  time_elapsed('drupal_theme_rebuild');
  drupal_theme_rebuild();

  time_elapsed('entity_info_cache_clear');
  entity_info_cache_clear();
  time_elapsed('node_types_rebuild');
  node_types_rebuild();
  // node_menu() defines menu items based on node types so it needs to come
  // after node types are rebuilt.
  time_elapsed('menu_rebuild');
  menu_rebuild();

  // Synchronize to catch any actions that were added or removed.
  time_elapsed('actions_synchronize');
  actions_synchronize();

  // Don't clear cache_form - in-progress form submissions may break.
  // Ordered so clearing the page cache will always be the last action.
  $core = array('cache', 'cache_path', 'cache_filter', 'cache_bootstrap', 'cache_page');
  $cache_tables = array_merge(module_invoke_all('flush_caches'), $core);
  foreach ($cache_tables as $table) {
    time_elapsed("clearing $table");
    cache_clear_all('*', $table, TRUE);
  }
  time_elapsed('hook_flush_caches', TRUE);

  // Rebuild the bootstrap module list. We do this here so that developers
  // can get new hook_boot() implementations registered without having to
  // write a hook_update_N() function.
  _system_update_bootstrap_status();
}
The next time I cleared cache (using admin_menu, since I wanted the dpm messages available), I saw the following:

registry_rebuild: Time elapsed: 0.003464 s, total time: 0.003464 s

drupal_clear_css_cache: Time elapsed: 3.556191 s, total time: 3.559655 s

drupal_clear_js_cache: Time elapsed: 0.001589 s, total time: 3.561244 s

system_rebuild_theme_data: Time elapsed: 0.003462 s, total time: 3.564706 s

drupal_theme_rebuild: Time elapsed: 0.122944 s, total time: 3.68765 s

entity_info_cache_clear: Time elapsed: 0.001606 s, total time: 3.689256 s

node_types_rebuild: Time elapsed: 0.003054 s, total time: 3.69231 s

menu_rebuild: Time elapsed: 0.052984 s, total time: 3.745294 s

actions_synchronize: Time elapsed: 3.334542 s, total time: 7.079836 s

clearing cache_block: Time elapsed: 31.149723 s, total time: 38.229559 s

clearing cache_ctools_css: Time elapsed: 0.00618 s, total time: 38.235739 s

clearing cache_feeds_http: Time elapsed: 0.003292 s, total time: 38.239031 s

clearing cache_field: Time elapsed: 0.006714 s, total time: 38.245745 s

clearing cache_image: Time elapsed: 0.013317 s, total time: 38.259062 s

clearing cache_libraries: Time elapsed: 0.007708 s, total time: 38.26677 s

clearing cache_token: Time elapsed: 0.007837 s, total time: 38.274607 s

clearing cache_views: Time elapsed: 0.006798 s, total time: 38.281405 s

clearing cache_views_data: Time elapsed: 0.008569 s, total time: 38.289974 s

clearing cache: Time elapsed: 0.006926 s, total time: 38.2969 s

clearing cache_path: Time elapsed: 0.009662 s, total time: 38.306562 s

clearing cache_filter: Time elapsed: 0.007552 s, total time: 38.314114 s

clearing cache_bootstrap: Time elapsed: 0.005526 s, total time: 38.31964 s

clearing cache_page: Time elapsed: 0.009511 s, total time: 38.329151 s

hook_flush_caches: total time: 38.348554 s

Every cache cleared.

My initial response was to wonder how and why clearing cache_block could take so long. Then, however, I noticed the call to module_invoke_all('flush_caches') above, which should have been obvious: the time attributed to the first cache table also includes every module's hook_flush_caches() implementation. Since I was just looking for bottlenecks, I modified module_invoke_all() in includes/, as well as time_elapsed(), to get the following:

function time_elapsed($comment, $force = FALSE) {
  static $time_elapsed_last = null;
  static $time_elapsed_start = null;
  static $last_action = null; // Stores the last action for the elapsed time message

  $unit = "s"; $scale = 1000000; // output in seconds
  $now = microtime(true);

  if ($time_elapsed_last != null) {
    $elapsed = round(($now - $time_elapsed_last) * 1000000) / $scale;
    if ($elapsed > 1 || $force) {
      $total_time = round(($now - $time_elapsed_start) * 1000000) / $scale;
      $msg = ($force)
        ? "$comment: "
        : "$last_action: Time elapsed: $elapsed $unit,";
      $msg .= " total time: $total_time $unit";
      dpm($msg);
    }
  }
  else {
    $time_elapsed_start = $now;
  }
  $time_elapsed_last = $now;
  $last_action = $comment;
}

/** From includes/ */
function module_invoke_all($hook) {
  $args = func_get_args();
  // Remove $hook from the arguments.
  unset($args[0]);
  $return = array();
  foreach (module_implements($hook) as $module) {
    $function = $module . '_' . $hook;
    if (function_exists($function)) {
      // Instrumentation: time each hook_flush_caches() implementation.
      if ($hook == 'flush_caches') {
        time_elapsed($function);
      }
      $result = call_user_func_array($function, $args);
      if (isset($result) && is_array($result)) {
        $return = array_merge_recursive($return, $result);
      }
      elseif (isset($result)) {
        $return[] = $result;
      }
    }
  }
  return $return;
}

The results pointed to the expected culprits:

registry_rebuild: Time elapsed: 4.176781 s, total time: 4.182339 s

menu_rebuild: Time elapsed: 3.367128 s, total time: 7.691533 s

entity_flush_caches: Time elapsed: 22.899951 s, total time: 31.068898 s

features_flush_caches: Time elapsed: 7.656231 s, total time: 39.112933 s

hook_flush_caches: total time: 39.248036 s

Every cache cleared.

After a little digging into the features issue queue, I was delighted to find out that patches had already been committed to both modules (though entity api does not have it in the release yet, so you have to use the dev branch). Two module updates and two vsets later, I got the following results:

registry_rebuild: Time elapsed: 3.645328 s, total time: 3.649398 s

menu_rebuild: Time elapsed: 3.543039 s, total time: 7.378718 s

hook_flush_caches: total time: 8.266036 s

Every cache cleared.

Cache clearing nirvana reached!

Nov 24 2015
Nov 24

In Drupal 8, much functionality has been replaced by plugins. The replacement for "hook_menu()" is the Derivative plugin in combination with a "" file.

If you had static items in your hook_menu(), you can define them in the "" file; if you had generated menu items, let's say from content, a Derivative Plugin is the new way to go.

You can define your own Derivative Plugin by creating the folder "src/Plugin/Derivative" in your module.

In that folder you can create the file MyModuleMenuLinkDerivative.php.

In your PHP file you should create a class that extends DeriverBase and implements the ContainerDeriverInterface. The following example adds all nodes of type "page" to the main menu.

namespace Drupal\mymodule\Plugin\Derivative;

use Drupal\Component\Plugin\Derivative\DeriverBase;
use Drupal\Core\Plugin\Discovery\ContainerDeriverInterface;
use Drupal\node\Entity\Node;
use Symfony\Component\DependencyInjection\ContainerInterface;

class MyModuleMenuLinkDerivative extends DeriverBase implements ContainerDeriverInterface {

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, $base_plugin_id) {
    return new static();
  }

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    $links = array();

    // Get all published nodes of type page.
    $nodeQuery = \Drupal::entityQuery('node');
    $nodeQuery->condition('type', 'page');
    $nodeQuery->condition('status', TRUE);
    $ids = $nodeQuery->execute();
    $ids = array_values($ids);

    $nodes = Node::loadMultiple($ids);

    foreach ($nodes as $node) {
      $links['mymodule_menulink_' . $node->id()] = [
        'title' => $node->get('title')->getString(),
        'menu_name' => 'main',
        'route_name' => 'entity.node.canonical',
        'route_parameters' => [
          'node' => $node->id(),
        ],
      ] + $base_plugin_definition;
    }

    return $links;
  }

}

Your "" in the root of your module should look like this (the top-level key is an arbitrary plugin ID):

mymodule.menulinks:
  deriver: \Drupal\mymodule\Plugin\Derivative\MyModuleMenuLinkDerivative

If you want advanced stuff like configurable menu items and tweaking the menu link cache tags/contexts, create the folder "src/Plugin/Menu" in your module.

In that folder you can create the file MyModuleMenuLink.php.

class MyModuleMenuLink extends MenuLinkDefault implements ContainerFactoryPluginInterface {
  // Class overrides here.
}

In the root of your module, change the content of "" to:

mymodule.menulinks:
  deriver: \Drupal\mymodule\Plugin\Derivative\MyModuleMenuLinkDerivative
  class: \Drupal\mymodule\Plugin\Menu\MyModuleMenuLink

Here is the complete code of the basic example:

Here is the code of the advanced example with your own Menu Link Class:

Nov 24 2015
Nov 24

After over four years in development, the day finally arrived: November 19, 2015, the day Drupal 8 was released. Of course this was a very good reason to celebrate. So all three of our locations, Zürich, Austin, and Cape Town, joined the global community with hundreds of Drupalistas in cities around the world for the global Drupal 8 Release Party!

Cape Town festivities

Our office in South Africa's Mother City started the celebrations off. Even though he's Zürich-based, our CTO, Michael Schmid, was in town and held a presentation about Drupal 8, followed by fine food and drinks of course. Here are some impressions:




The Zürich party

Zürich was up next. We started our party with lightning talks about Drupal 8, where fellow developers shared their experiences with Drupal 8 so far. After that it was time for blue cocktails, beers, and cake (of course).

Cheers with our #Drupal8 cocktail @amazeelabs_zrh

— dagmita (@dagmita) 19. November 2015


Thanks for the cake @amazeelabs_zrh #Celebr8D8

— Mario Schnauss (@MSchnauss) 19. November 2015

Party in #ATX

Five long hours after Zürich and Cape Town finished off their last bites of cake, the Austin team got their party started, one they co-hosted with our friends from Four Kitchens.

The highlights of the night included cold beer, Blacks BBQ, Drupal 8 conversations, raffle prize winners, a celebration cake, and a custom made Drupal 8 piñata that may or may not have been filled with travel-size adult beverages (nobody knows anymore...).

Photos: Austin's Drupal 8 release party at the Four Kitchens office; the celebratory cake and piñata; the piñata aftermath; the spinning piñata (animated GIF); Kathryn and Maria holding remnants of the piñata.

Nov 24 2015
Nov 24

Much of the conversation in the Drupal 8 development cycle has focused on “NIH vs. PIE.” In Drupal 8 we have replaced a fear of anything “Not-Invented-Here” with an embrace of tools that were “Proudly-Invented Elsewhere.” In practice, this switch means removing “drupalisms,” sections of code created for Drupal that are understood only by (some) Drupal developers. In their place, we have added external libraries or conventions used by a much wider group of people. Drupal has become much clearer, much more explicit about its intentions by making this kind of change all over the codebase.

Replacing drupal_http_request with Guzzle

For instance we have gotten rid of our own “drupal_http_request()” and replaced it with the invented-elsewhere Guzzle library. Guzzle is a stand-alone tool written in PHP for making HTTP requests. Here is the change record that describes how code written in Drupal 7 would be modified to use the new library in Drupal 8. Ultimately it was Guzzle’s superior feature set that made it replace drupal_http_request. Guzzle can do a lot more than drupal_http_request and it does so in a much clearer fashion.

From the change record we see an example of Guzzle:

$client = \Drupal::httpClient();

$request = $client->createRequest('GET', $feed->url);

$request->addHeader('If-Modified-Since', gmdate(DATE_RFC1123, $last_fetched));


Compared to an example from Drupal 7:

$headers = array('If-Modified-Since' => gmdate(DATE_RFC1123, $last_fetched));

$result = drupal_http_request($feed->url, array('headers' => $headers));


The intentions of the code from Guzzle are much more explicit. The method “addHeader” is being called. Any developer could read that line and see what is happening. In the case of the Drupal 7 code the reader would be guessing. And sure, it might be easy enough to guess what drupal_http_request will do when it is passed multidimensional arrays. But it takes a lot of mental overhead for developers to think through the implication of each key within a multidimensional array.

It is not a coincidence that Guzzle, a library shared among many PHP projects, requires developers to be very clear about their intentions. Replacing drupal_http_request with Guzzle made Drupal’s code more explicit and comprehensible. There are numerous other examples where adopting or pursuing a concept from outside Drupal made Drupal itself clearer.

Classed Objects

Perhaps the clearest example of this shift is the switch to classed objects. Prior to Drupal 8, much of Drupal Core still showed its roots in PHP4 when support for object-oriented concepts was immature. Now instead of Drupal entities being represented simply as a “stdClass”, each entity type has its own class with defined properties and methods. Writing actual classes, methods, and interfaces encourages the Drupal community to think harder about what each entity is meant to do.

Drupal has a history of taking our “node” concept and bending it mercilessly to replace less developed parts of core. Anyone else remember usernode, which made a node for every user? When users and nodes were both shoehorned onto the same type of generic class it was easier to justify using one to make up for the shortcomings of the other and muddle our definitions by doing so. Actual classes force us to think more clearly.


Interfaces

Along with the usage of classed objects has come the adoption of interfaces. One of the problems interfaces have helped solve is the pain of registering new functionality like field widgets. In Drupal 7 and prior, adding a new field formatter meant writing a set of hooks: some were absolutely required while others varied by use case. Writing hook_field_formatter_info() implies writing hook_field_formatter_prepare_view(), but it doesn't require it. In Drupal 8 we have Drupal\Core\Field\FormatterInterface to tell us exactly what is needed by a formatter, and module writers can extend Drupal\Core\Field\FormatterBase to avoid rewriting boilerplate.

Interfaces also enable Drupal 8 to put services in a Dependency Injection Container. Again, Drupal is replacing NIH with PIE. Previous versions of Drupal assumed that any portion of code could access the global state at any time for any reason. That assumption makes isolating a section of code so that it can be replaced very difficult or impossible. In Drupal 8 many corners of Drupal, like breadcrumb handling, have been rewritten as a "service" which is carried in a "container". A service declares exactly what it depends on and exactly what it will do (through an interface). This enables developers to replace one service with another version. Here is a detailed breakdown from Larry Garfield on how to do so with breadcrumbs. Again, for Drupal to use an outside concept, its own code must be more explicit.


Caching

Chasing an invented-elsewhere caching strategy has made Drupal's internal caching system much clearer. Inspired by Facebook's Big Pipe caching strategy, Drupal developers led by Fabian Franz and Wim Leers have made huge improvements to Drupal 8 caching. And of course they have made the caching system more explicit in doing so. The idea of Big Pipe is simple: make a page load fast by sending only a skeleton of markup and then filling in all of the separately-cached blocks of content. For that strategy to work, each block must be very explicit about how long it can be cached and which changes to context or content would invalidate it. Now, when rendering something, a developer is expected to explicitly state:

  1. The cache tags. These are data points like node ids. The idea is that if a node is resaved, every cache that declared the node’s id as a cache tag can be invalidated.

  2. The cache contexts. Some render elements vary based on more global concepts like the language of the current request/user or the role of the current user. This addition makes it much easier to do something like showing a full article to subscribers and a truncated article to anonymous users.

  3. The max-age of the cache. Some elements have to be regenerated every 10 minutes. Some can stay cached as long as their tags and contexts remain unchanged. Before Drupal 8, cache ages were much more of a guesswork operation.
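As a sketch of what this looks like in code, a render array can declare all three kinds of cacheability metadata in its #cache key. The specific tag, contexts, and markup below are illustrative assumptions, not taken from the original post:

```php
<?php
// A render array with explicit cacheability metadata (illustrative values).
$build = [
  '#markup' => 'The latest article teaser',
  '#cache' => [
    // 1. Tags: invalidated whenever node 5 is re-saved.
    'tags' => ['node:5'],
    // 2. Contexts: output varies by interface language and the user's roles.
    'contexts' => ['languages:language_interface', 'user.roles'],
    // 3. Max-age: regenerate at least every 10 minutes (600 seconds).
    'max-age' => 600,
  ],
];
```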

For more information, watch this talk from Drupalcon Barcelona from Wim Leers and Fabian Franz.

Configuration Management

Drupal 8's configuration management strategy is perhaps the most touted new feature, and it brings plenty of PIE. The Configuration Management Initiative helped pave the way for standardized, exportable configuration, a concept developers from other systems expect as a matter of course. The discussions within the Configuration Management Initiative considered a number of different export formats like JSON and XML. YAML was ultimately chosen, and that format is now used throughout Drupal for different purposes like the definition of services, libraries, CSS breakpoints, and more. Even the drupalism of .info files in modules and themes has been replaced with this widely understood, invented-elsewhere format.
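For instance, a Drupal 8 module's .info.yml file is plain YAML. A minimal sketch for a hypothetical module named mymodule (the name and description are made up for illustration):

```yaml
# — replaces Drupal 7's file.
name: 'My Module'
type: module
description: 'A hypothetical module definition, shown for illustration.'
core: 8.x
package: Custom
```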

Additionally, core has taken concepts from CTools module for how configuration moves from the file system to the database to a cache. Now in Drupal 8, configuration can be exported to the file system (in .yml files) to be committed to git and moved across environments. The .yml files can then be imported into a database in a consistent fashion where they are stored and cached.

The main improvement over Drupal 7 is the consistency across Drupal subsystems. In Drupal 7, a developer had to remember that “overridden” meant one thing when Features module used that word in relation to Field configuration and “overridden” meant a slightly different thing when Views UI used it in relation to an exported View. By treating configuration in a consistent manner across subsystems the whole architecture becomes more cohesive.

Increased explicitness often was not the main goal of the above changes. The developers leading those changes often just wanted a better system that was easier to work with. In doing so, we’ve made a version of Drupal that should be clearer and more understandable to developers new to Drupal. We do not have to explain the difference between Filter module storage and Panels. We can just say “our configurations are all stored in the same way”. Drupal becomes more approachable and will travel further because of it.

Topics Drupal Planet, Drupal
Nov 24 2015
Nov 24

By Valentin Garcia 23 November 2015

Put your Drupal Site in Maintenance Mode Manually

Drupal allows you to take a website offline with a few clicks via the admin interface.

However, we've seen situations where the admin interface becomes unavailable, often via a white screen of death.

In this tutorial, I'm going to show you a manual way to force your Drupal 7 site into maintenance mode.

Step #1. Edit the settings.php file

  • Edit the sites/default/settings.php file, using an FTP client or through cPanel:
  • At the very end of settings.php, add the code below:
$conf['maintenance_mode'] = 1;

Step #2. End result

Your site will now display the "Site under maintenance" page.


Step #3. Put your site back online

Remove the line of code from Step 1, or change the value to 0 to put your site online again:

$conf['maintenance_mode'] = 0;

About the author

Valentín creates beautiful designs from amongst the tequila plants of Jalisco, Mexico. You can see Valentín's design work all over this site and you can often find him helping members in our support forum.

Nov 24 2015
Nov 24

The issue at hand

As most Drupal 6 site owners are aware, after a prolonged development period, Drupal 8 was officially released (8.0.0) last week on November 19th, 2015 (Dries's birthday), with many a lively celebration around the world to mark the occasion (#celebr8d8), like this fancy one in downtown Durham atop the rooftop bar of The Durham Hotel.

Drupal 8.0.0 is a BIG DEAL and generally speaking is great for the community of Drupal site owners and site builders.

However (there’s always a but), with the official release of Drupal 8, support for Drupal 6 will end on February 24th, 2016. Given the U.S. holiday season has begun, there is little productive time remaining to undertake a site upgrade before Drupal 6 End Of Life (EOL). If you are a site owner fortunate enough to have survived a Drupal site upgrade in the past, you are well aware that the upgrade process can be time-intensive for complex sites. It is never as easy as the click of a button. For most Drupal 6 site owners, it is the fact that their sites are so complex that they have avoided going through the upgrade process for as long as possible.

This presents responsible yet practical site owners who don’t have unlimited budgets with difficult decisions, each with associated pros and cons to weigh. In part 1 of this series, we’ll help walk you through the following topics at a high-level, with a follow-up post examining each topic in finer detail.

  • What does Drupal 6 EOL mean for me?
  • What are the risks to not upgrading?
  • What are my options?
  • Should we upgrade to Drupal 7 or Drupal 8?
  • How do we decide what to do?

N.B.: if you don’t fit into the aforementioned category of having budgetary constraints, please contact us immediately. ;)

What does Drupal 6 EOL mean for me?

Like any good (and probably the bad too) Drupal advisor will tell you, it all depends. Helpful, right? But truly, it’s necessary to understand the organization and its technical requirements very well to assess the risk of operating a Drupal 6 site after EOL. As an agency that leverages open source technology to build modern web applications, on a daily basis Savas relies on the Newtonian

…shoulders of giants…

to perform sophisticated tasks with the click of a button (or more likely a command in the terminal). That Drupal community that we access and contribute to for features is the same one that provides security maintenance. After EOL for Drupal 6, that click-of-a-button access goes away, both for features and, more importantly, for security fixes. In other words, it means (almost) no one is watching Drupal 6 after February 24th, 2016. For sustainability purposes, that huge community (~100,000 active contributors) must use its time and energy to support the newer platforms.

State of Drupal 6 sites in production

Drupal 6 has been around for a long time. As of mid-November 2015 there are at least ~125,000 reported instances (likely an undercount) of Drupal 6 sites in the wild. So you are not alone (…I am here with you…) and there is some comfort in that. If you're reading this and have not begun your upgrade process yet, it is very likely you will be spending at least some time outside of support for your Drupal 6 site. We will dive into this at a deeper level in part 2, but some of the factors worth taking into consideration as you strategize the upgrade are:

Considerations to assess risk
  • How well-known is your organization?
    • Larger organizations with high public profiles are systematically targeted more frequently than smaller, lesser-known organizations.
  • How many contributed modules does your site utilize and how well supported are those modules?
    • Attack vectors that remain for Drupal 6 are likely to be modules that have not received a lot of historical support, but are in some way identifiable to the public when they are in use on a site.
  • How much does your site rely on custom code?
    • Custom code has the advantage of not being publicly known, but the large disadvantage of only being vetted by one site.

What are the risks?

High-level risks, more closely examined in part 2, are as follows, from most severe to least.

  • Complete site compromise and control with consequences dictated by the whim of a hacker.
  • Site incompatibility with mandatory server security upgrades that fall out of sync with Drupal 6 (PHP 7 comes out in late 2015).
  • You do not keep up with modern web development practices. After all, Drupal 6 came out January 1st, 1970 (I just checked) and given that makes it older than 6 months on the web, it’s ancient.
  • You expose yourself to a shrinking market of developers able to serve you. With each major release, especially two in a row (7 and 8) with significant architectural modifications, skills honed on the current version of the software provide diminishing returns the further back in versions you go.

What are my options?

In considering a Drupal 6 upgrade you have a few simple options.

  • Do nothing, and keep your fingers crossed.
  • Upgrade Drupal 6 core to a supported version (probably Drupal 7) and match existing functionality.
  • Engage a robust redesign/rebuild (Drupal 7 or 8).
    • Simultaneously harden the site to best mitigate attack vulnerabilities as the rebuild may take 6-18 months to complete.
  • Select a different solution than Drupal, and migrate to that.

Drupal 7 or Drupal 8… heck, what about Drupal 9?

This is another one that, I know…shocker, depends. The factors that affect this choice, which we'll discuss more in part 2, are:

  • Organizational tolerance for risk: Drupal 8 is less tested, and is inherently riskier earlier on in the life cycle of your site.
  • Willingness to support community: In some cases Drupal 8 contributed modules will need extra polish to be up to production snuff.
  • Complexity of site: Drupal 8 core has many more bells and whistles, but the contributed module landscape has a long way to catch up to Drupal 7.
  • What is the future/life of the site: Drupal 8 is much more forward-thinking in its approach, whereas Drupal 7, though well vetted, is over 4 years old.
  • Existing developers' skill set: Drupal 8's architecture, object-oriented coding style, and PFE (proudly found elsewhere) approach, which leverages strengths from the rest of the PHP community, all mark substantial shifts from Drupal 7. The skills required to succeed in these two realms therefore differ.
  • Get out of here with that Drupal 9 talk! It’s neither prime nor even!
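
To make the object-oriented shift concrete, here is a minimal sketch of a Drupal 8 page controller. The module name ("example"), class, and method are hypothetical; what matters is that a page callback is now a class method returning a render array, where Drupal 7 would have used a procedural hook.

```php
<?php

// Hypothetical module "example". In Drupal 7 this page would be wired up
// via procedural hooks; in Drupal 8 it is a class with a route pointing at it.
namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;

class ExampleController extends ControllerBase {

  // Returns a render array instead of printing markup directly.
  public function hello() {
    return [
      '#markup' => $this->t('Hello from Drupal 8.'),
    ];
  }

}
```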

What is our recommendation?

If you feel lost in these concepts or with answering some of these questions on your own, it’s best that you speak with professionals who have years of experience maintaining and upgrading Drupal sites. The upgrade process is a highly variable one, and is not especially easy to estimate as it is much more nuanced than typical feature development.

Reaching EOL for your existing Drupal site is a time when we encourage site owners to look at the process like moving into a new and better home. It's best to take the time to envision and create what you want in the new space, rather than thoughtlessly replicate what you had in the old. Why make a carbon copy when you had good reasons to make the move after all (even if you were technologically strong-armed by volunteers)? It's very common to have features and custom development that have outlived their usefulness to your organization's mission, so it's a good time to purge. Out with the old, in with the new!

Having said that, the desire to preserve content from the existing site is very common and often necessary. There are advanced migration techniques available from Drupal 6 to Drupal 7 or Drupal 8 that may be entirely separated from the rest of the rebuild, so porting content and matching site functionality can be completely decoupled.
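
As a rough sketch of where such a migration starts, the Drupal 8 core Migrate modules can be enabled with Drush. This assumes Drush is installed and a Drupal 8 target site exists; exact module names and commands have varied between early Drupal 8 releases, so treat this as illustrative only.

```shell
# Sketch only: enable the core Drupal-to-Drupal migration modules
# on the new Drupal 8 site (assumes Drush; names may vary by release).
drush en -y migrate migrate_drupal

# From here, the migration is pointed at the legacy Drupal 6 database
# and run separately from any theming or feature rebuild work.
```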

We love talking through this process with site owners. We analyze what makes the most sense for your organization while addressing priorities for both short and long term goals. We have been building sites in Drupal 8 since May 2015 and sites in Drupal 7 since 2010, so we are well versed in the pros and cons of each. Reach out to further discuss, and stay tuned for parts 2 and 3.

About the author

Chris Russo

Nov 23 2015

We couldn't be more excited to bring DrupalCon to India: it's a unique and colorful nation with many amazing cultures. Though the Con itself will be held in Mumbai, we strongly recommend that anyone traveling to India for DrupalCon take the time to tour some of the many fascinating regions of India.

How much does India have to offer? Our friends at Niswey illustrated the Druplicon on a tour of the country, experiencing four unique cultures that India has to offer. Here's the comic, and you can see more information on each frame below.

Druplicon Experiencing Indian Culture

Drupal Anna: Mind It! This frame represents the Druplicon in southern India, and he's wearing a lungi below his shirt. In the backdrop, there are coconut trees and a beach, which can be seen only in the southern states. Anna is a word people in the southern states (Kerala, Tamil Nadu, Andhra Pradesh, and so on) use to call out an elder brother or just to address a random person. The phrase "Mind it" is an expression made popular by the famous movie star Shah Rukh Khan, when he played a cameo as a south Indian star. "Mind It!" means: "You better get that, because I'm the boss." This is the attitude with which South Indian stars are often depicted in the movies.

(Want to know more about the southern states? You can watch the Lungi Dance, a fun song made as a tribute to Rajnikanth, one of the most famous southern movie stars and a cultural icon. You'll learn some cool dance moves, too!)

Drupal Paaji: Chak De Phatte! This frame represents the culture of Northern India, mainly Punjab and Delhi. He is doing Bhangra, a dance form most popular in the states of Northern India. Paaji is another word for a brother in Delhi and Punjab. In the backdrop, there are mustard fields, a common sight in the northern states. Chak de Phatte is an expression used in excitement or at celebrations in the region. 

Drupal Dada: Khoob Bhaalo! Drupal Dada (another word for brother) is shown as a person from Eastern India. He is wearing a kurta and dhoti, the traditional attire of the East Indian states. In the background, there is a yellow cab and a tram, common public transports, and the white building is Victoria Memorial, a landmark in Kolkata, the most populous city in Eastern India. Khoob Bhaalo (in Bengali, a regional language) means "very good" and is familiar to people across India.

Drupal Bhai: Jhakaas! Drupal Bhai represents Mumbai, the city where DrupalCon Asia will be held, and the most populous city in Western India. The Mumbai Drupal Bhai (or brother) is dressed like a flamboyant movie fan, because Mumbai is also the city of Bollywood, one of the biggest movie industries in the world. In the background, you'll find the Gateway of India, a landmark in Mumbai. The word Jhakaas is an expression for anything that is fantastic. It was made popular by movie star Anil Kapoor, the actor who played the game show host in Slumdog Millionaire.

Hopefully, you've explored India a bit through the Druplicon's journey. Regardless of whether you go on a tour of your own, we hope to see you in Mumbai this February!

The Drupal Association has partnered with Niswey, an India-based marketing firm, to provide marketing materials for DrupalCon Asia. Every few weeks, we'll be sharing the blogs and comic strips that our Niswey friends have created in anticipation of the convention.

Nov 23 2015

This tutorial is part of the "Build a Blog in Drupal 8" series:

  1. Content types and Fields
  2. Adding Comments
  3. Using Views
  4. Managing Blocks
  5. Create and Manage Menus

A website's navigation plays an important part in how easy a site is to use. It's essential that you spend time fleshing out the overall IA (information architecture), or you'll end up with a site that's hard to use and difficult to navigate.

Previous versions of Drupal have always offered a simple interface for managing menus, and Drupal 8 is no exception.

In this tutorial, we'll continue building our site by adding some menus. We'll create a custom menu that displays links to popular categories and place it in the footer, then create an "About us" page and add its link to the main navigation.

This tutorial is part of the "Build a Blog in Drupal 8" series. Make sure you've read the previous tutorials if you want to follow along.

How to Manage Menus

When you install Drupal 8 using the Standard installation profile, a menu called "Main navigation" is used as the primary site navigation, with a single link called Home. Another menu, called Footer, is used in the footer region, with a single link pointing to the contact form.

Fig 1.0

Create Menu

Let's now create a custom menu that displays a curated list of popular categories. On a blog, these links help you promote categories and send traffic to those pages.

1. Go to Structure, Menus and click on "Add menu".

2. Enter "Popular categories" into the Title field and "Curated list of categories." into "Administrative summary". Then click on Save.

Fig 1.1

Once it's been created, you'll be redirected to the menu edit page. You'll be able to manage all menu links from here.

Create Menu Link

1. Now click on "Add link".

2. Enter "Drupal content" into "Menu link title", then enter the taxonomy term URL for your Drupal category. On my site it's "/taxonomy/term/1", so I'll enter that. Then in Description, enter "Latest Drupal content."

Fig 1.2

The Link field is new in Drupal 8; it allows you to link directly to internal content pages by typing in the title of the content.

However, the autocomplete only works for node content: you can't search for a taxonomy term. You can add one manually by entering the internal URI, e.g., /taxonomy/term/1. You can also link to external pages or domains by entering the absolute URL.
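
If you prefer to script this step, menu links can also be created programmatically with the menu_link_content module's entity API. This is a minimal sketch: the menu machine name ("popular-categories") and the term ID are assumptions taken from this tutorial, so adjust them for your site.

```php
<?php

use Drupal\menu_link_content\Entity\MenuLinkContent;

// Sketch: create the same link from code. The menu machine name and
// term ID below are assumptions; check yours under Structure > Menus.
MenuLinkContent::create([
  'title' => 'Drupal content',
  'description' => 'Latest Drupal content.',
  'link' => ['uri' => 'internal:/taxonomy/term/1'],
  'menu_name' => 'popular-categories',
])->save();
```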

Fig 1.3

Place Menu into Region

Now let's go and add our new menu into the footer.

1. Go to Structure, "Block layout" and click on "Place block" in the "Footer first" region. Search for "Popular categories" and click on "Place block".

Fig 1.4

2. Leave the "Configure block" modal pop-up as is and click on "Save block".

3. Don't forget to click on "Save blocks" at the bottom of the page.

4. Go to the homepage and in the footer region you should see the menu.

Fig 1.5

The menu links can be managed by hovering over the menu and clicking on "Edit menu".

Fig 1.6

Create About Us Page

Now we need to create an "About us" page and place a menu link in the main navigation so it's easily accessible. We'll add the menu link directly from the content edit form.

1. Go to Content, "Add content" and click on "Basic page".

2. In the Title field add "About us" and some text into the Body field.

Fig 1.7

3. In the right sidebar, click on the "Menu settings" vertical tab then check the "Provide a menu link" checkbox.

Fig 1.8

From these settings you can set the menu link title, which is automatically pulled from the Title field. If you want text to appear when you hover over the link, add it to the Description field.

The "Parent item" drop-down lets you select a parent menu link or a menu group. By default you can only add links to the "Main navigation" menu, but other menus can be made available from this drop-down if configured.

Finally, the Weight field lets you control the order in which the links will be displayed. But I recommend you reorder the links from the menu page because it's much easier.

Once you've defined the menu link, scroll to the bottom and click on "Save and publish".

4. Now in the header you should see the "About us" before the Home link.

Fig 1.9

Reordering Menu Links

In the image above, "About us" appears before the Home link, which is not ideal. Let's reorder the links so Home comes first, then "About us".

To reorder the links you must go to the menu edit page, and this can be done in two ways. First, you can edit the menu by going to Structure, Menus and clicking on "Edit menu" in the "Main navigation" row. Another way is to hover over the menu, click the edit icon, then click "Edit menu".

Fig 1.10

Once you're on the edit menu page, simply drag the Home menu to the top and then click on Save.

Fig 1.11

Now Home should be the first link in the menu.


From a site building standpoint, the UI for managing menus has stayed largely the same. If you know how to manage menus in Drupal 7, you should be fine in Drupal 8.

But from an architectural standpoint, the routing system in Drupal 8 has been rebuilt on Symfony components. If you want to learn more about the technical changes, check out this great post: "What Happened to Hook_Menu in Drupal 8?".
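
As a quick taste of that change: a route now lives in a module's *.routing.yml file instead of a hook_menu() implementation. Here is a minimal sketch; the module name ("example") and controller class are hypothetical.

```yaml
# example.routing.yml (hypothetical module "example")
example.about:
  path: '/about-us'
  defaults:
    _controller: '\Drupal\example\Controller\ExampleController::content'
    _title: 'About us'
  requirements:
    _permission: 'access content'
```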
