Jan 26 2015
Jan 26

It’s hard enough trying to find cool websites in general, let alone cool websites made using Drupal. I’ve managed to find 5 that I’d like to highlight below:

Anthelios by Laroche-Posay

A standalone product page disguised as an educational site about UV protection by Laroche-Posay. It’s great seeing widely different uses for Drupal. This is mostly a single-page design that uses a lot of creative overlays and animations as you scroll (and be prepared to scroll a lot to get through all of it). The art style is cartoony yet inviting and goes very well with the content.


LiveAreaLabs

I’ve always been a sucker for clean, minimalistic site design. LiveAreaLabs, a design firm based out of New York / Seattle, has made just that. Clean lines, liberal use of white space, smooth page and section transitions, and one of the more unique “hamburger-style” menus I’ve seen used on a responsive website. Everything about this site is just cool, even down to the way the logo adapts in color as you scroll across different background colors.


Yet another clean, minimal site design by some creative folks in Poland. Their portfolio layout is great: they let their large visuals do all the communicating, with just a list of services provided for each project. I find this a lot more powerful than using a bunch of words and paragraphs. They’ve also got a cool animated dot-loading GIF for page transitions. One minor gripe: I wish the GIF looped more seamlessly.

Tyler School of Art

Here’s a website that has a lot of content but formats it well with interesting shapes, colors and textures. The site is fully responsive and carries its design language well across all resolutions. I had no trouble navigating around the massive site and really dug how they treated headings and typography; I never felt lost. Kudos to the designers of this site.

82nd & Fifth

A spin-off from the Metropolitan Museum of Art: a showcase of 100 works of art curated over the course of a year. There are weekly episodes of highlighted art slideshows and intuitive uses of pinch-and-zoom for viewing the art up close. The site is also available as an app for iOS in 12 different languages.

Jan 26 2015
Jan 26

This is part two of a series on configuration management challenges in Drupal 8. Part 1 looked at challenges for small sites and distributions.

What is the state of support for distributions in Drupal 8?

Trying to gauge the state of anything in Drupal 8 has its inherent pitfalls. The software itself is still changing rapidly, and efforts in the contributed extensions space have barely begun. That said, various initiatives are in process.

For background on configuration management in Drupal 8, see the documentation on managing configuration and the configuration API. Drupal 8 configuration is divided between a system for one-off simple configuration, like module settings, and configuration entities, which store items you may have any number of (from zero up), like content types and views. The Drupal 8 handbook pages on configuration are useful but not fully up to date.
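As a rough illustration of that split, exported Drupal 8 configuration lives in YAML files. The sketch below shows the two shapes; the file names follow core's conventions, but the keys are simplified and may differ between versions:

```yaml
# Simple configuration: a single settings object per module,
# e.g. system.site.yml. There is exactly one of these per site.
name: 'My site'
mail: admin@example.com
page:
  front: node
---
# A configuration entity: one file per item, e.g. node.type.article.yml.
# A site can have any number of these (content types, views, ...).
name: Article
type: article
description: 'Use articles for time-sensitive content.'
```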

Two recent blog post series that provide background and technical details are:

The challenges

Distributions in Drupal can be divided into two main types:

  • Starter-kit distributions like Bear are designed to get you started in building a site that you then take in your own direction.
  • Full-featured distributions like Open Atrium or Open Outreach are designed to fill a use case and support upgrades.

This distinction is important in light of the Drupal 8 assumption that sites, not modules, own configuration. Starter-kit distros will work fine with this assumption, but for full-featured distros it presents major challenges; see part 1 of this series.

Configuration management in Drupal 8 is built primarily around the single-site staging or deployment problem rather than the requirements of distributions. Back in 2012 a discussion tried to assess what was needed to make Drupal 8 distribution-friendly, but it didn't get far.

Two types of tools look to be needed to fill the gaps.

  • Developer tools. Managing configuration in distributions will require exporting it into feature-like modules. Because any extension (module, theme, installation profile) can include configuration, most of the needs of distribution authors are a subset of what any extension developer will need. For example, the built-in Drupal 8 configuration export functionality is designed only for use on a single site.
  • Site tools. Since Drupal core's single-site configuration management model conflicts with the requirements of updatable distributions, specialized modules will be needed to provide distribution-based sites with the ability to receive configuration updates.
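For context, the single-site workflow that core does support can be sketched from the command line with Drush. This is a sketch only: the command names are from the Drush 7-era configuration commands, and the directory path is a hypothetical example.

```shell
# Export the site's active configuration to a directory outside the web root.
drush config-export --destination=../config/staging

# On the same site (or its staging copy), pull that configuration back in.
drush config-import --source=../config/staging
```

Note how both ends of this round trip are the same site; nothing here addresses shipping configuration inside a distribution's modules, which is the gap the tools above aim to fill.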

Emerging solutions

  • Features 8.x
    Some of the first efforts to provide distribution-related functionality came in the form of sketches towards a Drupal 8 version of the Features module. The sandbox module contains a small collection of methods that can be called from the Drupal command line utility Drush for editing and reverting configuration modules.
  • Configuration Development
    Configuration Development provides automated import and export of configuration between the active configuration storage and exported modules.
  • Configuration Revert
    The sandbox Configuration Revert project provides a set of reports that allows you to see the differences between the configuration items provided by the current versions of your installed modules, themes, and install profile, and the configuration on your site. From these reports, you can also import new configuration provided by updates, and revert your site configuration to the provided values.
  • Configuration Packager
    Configuration Packager enables the packaging of site configuration into modules, like Features for Drupal 7. Rather than producing manually authored individual features, Configuration Packager analyzes the site configuration and automatically divides it up into configuration modules based on configured preferences.

Remaining work and coordinating efforts

See the issue META: Required functionality for Drupal 8 distributions for an initial inventory of the work outstanding to prepare for Drupal 8 distributions.

As usual, the main challenges are probably not so much technical as strategic and organizational. To prepare the way for Drupal 8 distributions, we need to coordinate to understand barriers, explore solutions, and pool efforts.

Part of this work will be developing shared, generic tool sets. Already, there's a lot of work in modules like Features 8.x and Configuration Packager that isn't specific to features or packages of configuration and would better be merged into a more generic solution; see the issues #2383959, #2405015, and #2407609. Configuration Development is the most likely candidate (#2388253), though there are some outstanding issues.

Interested in helping? Please comment on and help flesh out the meta issue or the issues and projects referenced there.

Packaging configuration

My own efforts have been focused recently on taking a fresh approach to packaging configuration in Drupal 8 in the Configuration Packager module. In my next post in this series, I'll introduce that project.

Jan 26 2015
Jan 26

The Kickstarter campaign was not funded, but that does not mean that it was not successful! We are still moving ahead. I've just published my first course on Udemy and would like to get pilot members to provide feedback so that I can make sure the course ends up being world class.

Here is a coupon code to access the course for free:

The course introduction provides more details about the planned direction for the training. So, I won't repeat it all here. Suffice it to say that I am still planning to follow the Ridiculously Open Online Self Training Site philosophy.

Udemy requires that all coupons have a quantity specified. I have set the code to allow 250 redemptions. I'll update this post if the coupons "sell out."

Jan 26 2015
Jan 26

In the previous article we saw how we can interact programmatically with Views in Drupal 8 in order to create a custom field in our Views results. Today, we will look at how we can create a custom filter that you can then add to a View in the UI to influence the results.

Filters in Views have to do with the query being run by Views on the base table. Every filter plugin is responsible for adding various clauses to this query in an attempt to limit the results. Some (probably most) take configuration parameters so you can specify in the UI how the filtering should be done.

If you remember from the last article, to create our field we extended the FieldPluginBase class. Similarly, for filters, there is a FilterPluginBase class that we can extend to create our own custom filter. Luckily though, Views also provides a bunch of plugins that extend the base one and which we can use or extend to make our lives even easier. For example, there is a BooleanOperator class that provides a lot of the functionality needed for this type of filter. Similarly, there is an InOperator class, a String class, etc. You can find them all inside the views/src/Plugin/views/filter directory of the Views core module.

In this tutorial, we will create two custom filters. One will be very simple and won't even require creating a new class. The second will be slightly more complex, and for it we will create our own plugin.

The code we write will go in the same module we started in the previous article and that can be found in this repository.

Node type filter

The first filter we will write is very simple. We want to be able to filter our node results by the machine name of the node type. By default, we can use a filter in which we select which node types to include. Let's say, for the sake of argument, that we want a more complex one, such as the one available for a regular text value like the title. The String class will be perfect for this and will, in fact, cover 100% of our needs.

So let's go to our hook_views_data_alter() implementation and add a new filter:


$data['node_field_data']['node_type_filter'] = array(
  'title' => t('Enhanced node type filter'),
  'filter' => array(
    'title' => t('Enhanced node type filter'),
    'help' => t('Provides a custom filter for nodes by their type.'),
    'field' => 'type',
    'id' => 'string',
  ),
);


Since the table whose query we are interested in altering is the node_field_data table, that is what we are extending with our new filter. Under the filter key we have some basic info plus the id of the plugin used to perform this task. Since our needs are very simple, we can use the String plugin directly without having to extend it. The most important thing here, though, is the field key (under filter). This is where we specify that our node_type_filter field (which is obviously a non-existent table column) should be treated as the type column on the node_field_data table, so by default the query alter happens on that column. This way we don't have to worry about anything else; the String plugin will take care of everything. If we didn't specify that, we would have to extend the plugin and make sure the query happens on the right column.

And that's it. You can clear your cache, create a View with nodes of multiple types and add the Enhanced node type filter to it. In its configuration you'll have many matching options you can use, such as equals, contains, does not contain, etc. For example, you can use contains and specify the letters art in order to return results whose node type machine names contain these letters.

Node title filter

The second custom filter we build will allow Views UI users to filter the node results by their title from a list of possibilities. In other words, they will have a list of checkboxes which will make it possible to include/exclude various node titles from the result set.

Like before, we need to declare our filter inside the hook_views_data_alter() implementation:


$data['node_field_data']['nodes_titles'] = array(
  'title' => t('Node titles'),
  'filter' => array(
    'title' => t('Node titles'),
    'help' => t('Specify a list of titles a node can have.'),
    'field' => 'title',
    'id' => 'd8views_node_titles',
  ),
);


Since we are filtering on the title column, we are again extending the node_field_data table, but with the title column as the real field to be used. Additionally, this time we are creating a plugin to handle the filtering, identified as d8views_node_titles. Now we need to create this class:


namespace Drupal\d8views\Plugin\views\filter;

use Drupal\views\Plugin\views\display\DisplayPluginBase;
use Drupal\views\Plugin\views\filter\InOperator;
use Drupal\views\ViewExecutable;

/**
 * Filters nodes by a list of allowed titles.
 *
 * @ViewsFilter("d8views_node_titles")
 */
class NodeTitles extends InOperator {

  /**
   * {@inheritdoc}
   */
  public function init(ViewExecutable $view, DisplayPluginBase $display, array &$options = NULL) {
    parent::init($view, $display, $options);
    $this->valueTitle = t('Allowed node titles');
    $this->definition['options callback'] = array($this, 'generateOptions');
  }

  /**
   * Override the query so that no filtering takes place if the user doesn't
   * select any options.
   */
  public function query() {
    if (!empty($this->value)) {
      parent::query();
    }
  }

  /**
   * Skip validation if no options have been chosen so we can use it as a
   * non-filter.
   */
  public function validate() {
    if (!empty($this->value)) {
      parent::validate();
    }
  }

  /**
   * Helper function that generates the options.
   *
   * @return array
   */
  public function generateOptions() {
    // Array keys are used to compare with the table field values.
    return array(
      'my title' => 'my title',
      'another title' => 'another title',
    );
  }

}

Since we want our filter to be of a type that allows users to select from a list of options to be included in the results, we are extending from the InOperator plugin. The class is identified with the @ViewsFilter("d8views_node_titles") annotation (the id we specified in the hook_views_data_alter() implementation).

Inside our plugin, we override three methods:

Inside init(), we specify the title of the set of filter options and the callback that generates the values for the options. This callback has to be a callable, and in this case we opted for the generateOptions() method on this class. That method just returns an array of options to be presented to the users, the keys of which are used in the query alteration. Alternatively, we could have created the options directly inside the init() method by filling up the $this->valueOptions property with our available titles. Using a callback is cleaner, though, as you can perform there whatever logic is responsible for delivering the necessary node titles.
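For illustration, that direct alternative would look something like this. This is a sketch only: the titles are placeholders, and the rest of the filter class is unchanged.

```php
public function init(ViewExecutable $view, DisplayPluginBase $display, array &$options = NULL) {
  parent::init($view, $display, $options);
  $this->valueTitle = t('Allowed node titles');
  // Fill the options directly instead of registering an 'options callback'.
  $this->valueOptions = array(
    'my title' => 'my title',
    'another title' => 'another title',
  );
}
```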

The point of overriding the query() and validate() methods is to prevent querying and validation from happening if the user saves the filter without selecting any titles. This way, the filter simply has no effect on the results rather than returning zero results. It's a simple preference meant to illustrate how you can override various functionality to tailor your plugins to your needs.

And that's it. You can add the Node titles filter and check the box next to the titles you want to allow in the results.


In this article we've looked at how we can create custom filters in Drupal 8 Views. We've seen what the steps are to achieve this and looked at a couple of the existing plugins that are used across the framework and which you can use as-is or extend.

The best way to learn how all of this works is by studying the code in those plugin classes. You will see whether they are enough for what you want to build or whether extending them makes sense. In the next article we are going to look at some other Views plugins, so stay tuned.

Jan 25 2015
Jan 25

Now that we are a few weeks into 2015, we’d like to look back at 2014 and share some interesting numbers about Drupal.org.


Last year Drupal.org received almost 48.9 million visits from 21.2 million unique visitors. The spike around September/October is due to spam-related traffic and, of course, DrupalCon Amsterdam.


152,200 users logged in to Drupal.org at least once during the year. Of those, 31,466 users performed at least one activity on the site, such as commenting, creating a node, or committing code.

More than 21,500 people left a comment or more in the issue queues. More than 4,000 people commented in the Drupal core issue queue.


Overall, 145,907 commits happened on Drupal.org, with more than 4,000 commits to Drupal core specifically.

More than 3,200 people committed code to contributed projects (not counting Drupal core), with an average of 37.43 commits per user.

More than 1,400 people got a commit mention in Drupal core patches.

Comments & Issues

Our users left 569,217 comments; 94% of them were comments in the issue queues. 30% of all comments in the issue queues happened in the Drupal core queue.

On average there were 22.4 comments per user, with 38.74 comments per user in the Drupal core issue queue.

Our users created 78,505 issues, with an average of 4.55 issues per user.

5,192 contributed projects were created on Drupal.org in 2014. 31% of those are sandbox projects.


On the infrastructure side, our uptime was 99.97% over 12 months, and the average full page load time for the year was 3.64 seconds across Drupal.org. It improved throughout the year; we were down to an average of 3.08 seconds for December. Our time-to-first-byte response was 1,374 ms in January; we were down to 441 ms for December. Drupal.org testbots tested over 33,300 patches. Average test queue and test duration times for Drupal 8 core were about 35 minutes each.


On the support front, 82% of issues in the issue queues got a response within 48 hours of being created.

The average response time (the time between when an issue was created and the first comment by someone other than the issue author) across all issue queues on Drupal.org was 82.87 hours. For the Drupal core issue queue this number was 60.68 hours. For Drupal.org-related queues, 34.19 hours.

* * *

You can find the full stats in the 2014 stats spreadsheet.

Compared to 2013, some of the user activity numbers went down, which is directly related to the phase of the Drupal release cycle. Right after the Drupal 7 release, user activity peaked, then slowly declined as Drupal 7 and the contrib ecosystem matured. We are looking forward to the Drupal 8 release! In the recent Drupal Association community survey, about 80% of respondents said they have firm plans to adopt Drupal 8, suggesting that the release will cause a huge boost in user activity on Drupal.org.

2014 was a great year, and thank you for spending some part of it on Drupal.org! We are excited to see what 2015 will bring.

Jan 24 2015
Jan 24

Agency and online customer Use Case & experiences

Platform.sh is a second-generation Platform as a Service (PaaS). It accelerates your PHP/Drupal/Symfony-based project development and reduces the risk of moving new features into live. Some customers are seeing circa 40% reductions in project budgets and revenue-loss prevention, whilst gaining huge improvements in developer productivity, eliminating environment resource management, and reducing live downtime to zero, all at commodity hosting prices! For an agency providing web development, commerce and hosting services, or for the end customer themselves, understanding the detail behind these very powerful messages is an important factor in making the right decisions around the critical tools and technologies that impact their business, especially if, say, the pricing structure appears to be a little higher than the known alternatives.

There is a huge amount of eCommerce experience built into Platform.sh

Commerce Guys are involved in many leading-edge developments that are pushing the boundaries of how eCommerce is being utilized and evolved to meet new business models, many of which are tied to faster development, more frequent changes and better uptime. These include migrating offline customers into advanced online purchase environments; encouraging those customers to spend more money whilst at the same time becoming less expensive to support, which requires tighter integration of support and customer care functions; delivering a B2C-like experience for B2B customers; defining online and mobile strategies in conjunction with each other; Drupal 8; Distributions; etc.

What gives Commerce Guys the credibility to offer such a convincing project development tool? We are a commercial software vendor, and we’ve invested several million dollars into building the Drupal Commerce application and its Kickstart distribution (deployed on over 50,000 active sites), so we know how to develop successful software products on an industrial scale. Of further relevance is the deep involvement we have in so many of our partner projects each year, providing analysis, design authority and development skills that put us in the middle of hundreds of individual and unique development processes! What we have engineered into the heart of Platform.sh is the flexibility to overcome the big problems and common manual activities that hold project teams back.

Different customers, all with common problems

Let’s take a look at a handful of typical eCommerce customers, and work through their issues:

  • A Digital Agency (DA) with a global pharmaceutical client who has many simple but different web-shop brands across 18 European countries.
  • A Systems Integrator (SI) with a high street optician as their customer, with an eCommerce system covering 14 territories. They have all the usual requirements of a high end client plus an unusually complex hosted infrastructure accommodating various index sites and 10 plus environments in each location, totalling 150 service instances.
  • A Retail Fashion client rolling out a Distribution based eCommerce system to 4 geographies.
  • A pureplay online marketing business providing 4,000 products through a Social Media community exceeding 200,000 people in 22 countries around the world, of which the mobile traffic accounts for over 70% of their revenues.

And although both the agency and the integrator are at the high end of technical capability, and the two retailers have far less experience, they all have similar sets of problems that only Platform.sh seems able to solve.

Complex eCommerce applications versus simple brochure-ware sites

To properly emphasize the advantages that Platform.sh brings to an eCommerce system, we first need to draw a comparison between the complex, transactional nature of these customers’ applications, which usually work differently in each country and as such require various different code bases, and, by contrast, brochure-ware sites with a central content repository, combined with simple language differences plus a content change workflow pushed out through a multi-site architecture.

Typical lifecycle issues that all 4 of these online businesses worry about

To start with, the development process differences between these two project personalities (multi-region eCommerce and multi-site brochure ware) are significant, the differences being 1) many more environments through which the upstream movement of code is being managed, 2) a much longer code-test-production timeframe, 3) bigger testing overheads (including tools, time and people), 4) complex content approval workflows, 5) higher consequential management costs, and 6) a severe risk impact of changes not working in production and feature release delays due to poor Continuous Integration (CI).

All the above are directly related to revenue loss - exacerbated by reputational damage in severe circumstances – which of course make them fairly unique to eCommerce. The effects on cost, time and business risk all increase exponentially when considering multi-country implementations.

What Platform.sh does for eCommerce that nobody else can!

Platform.sh solves many problems specific to this eCommerce use case, as well as easing various issues that make such projects more expensive to deliver and very laborious to manage, as follows:

  1. Many development process issues are greatly improved, resulting in significantly fewer coding errors due to inconsistent environments, and greatly reduced elapsed times in the code delivery process from the local environment through test, staging and user sign-off.
  2. Hugely improved Continuous Integration (CI) process that speeds up the change process for similar features across multiple environments into different local production services.
  3. True Continuous Delivery (CD) now becomes possible, because the process no longer requires a large number of changes to be bundled up and tested together before going to production every 6-8 weeks, say. In this new regime, even the smallest of changes can whistle through in less than 60 minutes, which is vital for changing aspects of the ‘Sale Offer’ during peak season, modifying coupon functionality for instance, or making micro-changes during an advertising campaign.
  4. Steep cost reductions associated with maintaining multiple static environments (kept around because re-creating new development environments from staging isn’t possible or takes too long). Developers now have the power to create and destroy their own full-stack environments that mirror staging or, say, the master branch.

We’ve learned from various retailers using Platform.sh in the run-up to holiday periods and promotions (especially Black Friday, Cyber Monday and December 26th) that the reduced risk of making changes into live offered by Platform.sh, plus the triple redundancy we provide in the Platform Enterprise (PE) offering with its ability to seamlessly scale up around traffic peaks, are all regarded as extremely valuable to their business. This combination simply cannot be provided by alternative vendors! It makes Platform.sh a must for any mission-critical eCommerce site.

Jan 24 2015
Jan 24

Think you’ve got Drupal or web smarts? We’re seeking mind-blowingly good sessions for DrupalCon Los Angeles, and want to hear from you about what you know best.

You don’t have to be the best in everything, but if there’s one topic you know inside and out, you should submit a session.

We’re looking for topics for the following tracks:

Submit a Session

Propose a Training

If you’re a company or individual who excels in Drupal training and want to bring your expertise to our con, we encourage you to propose a training for our Monday lineup.

Propose a Training

Send us proposals for your most fun and interesting session ideas! The call for content ends February 27th at midnight Los Angeles local time (UTC -8).

See you in Los Angeles!

Jan 23 2015
Jan 23

Google Summer of Code 2015 is approaching, and a few people have started asking me how to get selected for GSoC 2015 and where to start. So I thought I'd go ahead and write a blog post so that others can also benefit. This post targets students who have never participated in GSoC before and want to know how to get started with the application process and with open source in general.

Google Summer of Code 2015 logo

What is Google Summer of Code? How does it work?

The GSoC FAQ page should suffice to answer most of your queries, and I strongly suggest going through it before looking anywhere else for answers.

Google Summer of Code is a program that offers student developers stipends to write code for various open source projects. We work with many open source, free software, and technology-related groups to identify and fund projects over a three month period. Since its inception in 2005, the program has brought together over 8,500 successful student participants from over countries and over 8,000 mentors from 109 countries worldwide to produce over 55 million lines of code.

So, basically this is how it works:

  • Different orgs (open source organizations) submit their applications to be part of the program, and Google chooses about 190 of them based on their applications and past record.
  • Once the orgs are selected, the list will be available on Melange. Each org will have an ideas list and a homepage.
  • You need to choose one of the ideas from the list on the ideas page and submit your proposal. (Details on this below)
  • Then you wait for Google to announce the list of selected proposals. If you find your proposal there, the hardest part is over; now you code with your org for about three months and complete the proposed project.
  • If everything goes smoothly, you'll get a handsome paycheck for your contribution, and you'll have learnt a lot about your project, your org and open source.

There are so many orgs, which one do I choose?

This is probably the single most asked question every year around this time. The answer is pretty straightforward: if you're already involved with an open source organization and want to continue working with the same org, go for that one. If not (which might be the case for most of you reading this post), you need to choose a few orgs from the list of all accepted orgs. Although you will finally work with only one org, it might be a nice idea to select 1-3 orgs to which you may submit your proposals. You can shortlist the orgs based on tags; for example, if you're familiar with C++, you can filter for the orgs which have the C++ tag mentioned on Melange.

If the org list for this year is not out yet, you can look at the lists of orgs which participated in GSoC in previous years. For instance, you can take a look at the lists of orgs which took part in 2014 and 2013. Filter the orgs based on the tags you're either familiar with or want to work on. Orgs which participated in previous years and took more than a couple of students are more likely to get accepted again this year. Based on this and your favorite tags, filter out 1-3 orgs.

After this, the next task is to go through the ideas lists for those orgs and decide which ideas interest you most. If you don't fully understand the ideas, that's completely fine; the next step is to get your doubts cleared up by contacting the org and/or the mentor of the task (more on this in the next section).

Okay, I've decided on an org and a project idea. What do I do next?

Once you've decided which project idea interests you most, and some parts of the description are unclear or you want to clarify a few details, you should get in touch with the task mentor and the organization in general. All the orgs have a contact section on Melange which will tell you how to reach them. Most orgs prefer communication via IRC or mailing lists, so you can get in touch with the org there. You can also ping the task mentor on IRC or email them to clarify any doubts you might have regarding the project.

Although it's not compulsory, it's usually a good idea to contribute to the org before sending your proposal. In order to do that, you can ask questions like "Hey, I'm new here, can anyone help me get started with contributing?" either on IRC or the mailing lists. Since orgs get asked such questions very frequently, many of them have a 'Getting Started' page, and it'll be very helpful if you find that page and follow the instructions. If you have any doubts, don't hesitate to ask. Mentors are generally nice people and will help you through.

How to start contributing

Contributing to an org means helping to fix bugs (issues), writing documentation, testing, etc. All orgs use an issue tracker to keep track of their issues/bugs, and most of them have a novice/beginner/quick-fix tag which lists tasks that are easy for beginners to fix. You can get more info by contacting the org. Contributing to open source is fun, and if you're not having fun, you're doing it wrong.

Writing a good proposal

Once you've finalized the project idea and have started contributing to the org, the next and most important step is to write a proposal. Many orgs have an application template of sorts, and if your org has one, you need to follow it. Otherwise, you can start by specifying your personal information and then move on to the project description. Here are a few tips for writing your project proposal:

  • Include a detailed timeline based on how you intend to complete the project.
  • Make sure to list any bugs you've worked on and/or links to your contributions.
  • Double, actually triple check for spelling mistakes.
  • Don't forget to mention your contact info.
  • Last but not the least, don't forget to update Melange with your latest proposal.

Once your proposal is ready, ask the mentor (and/or the org admin) to review it before you finally submit it to Melange. Ask whether you could explain any parts of it better, and follow up on their feedback. The most important part is really understanding the project idea and reflecting that understanding in your proposal.

Some Do's and Don'ts

Here are some miscellaneous tips for communicating with your org more effectively:

  1. Don't ask to ask: Don't hesitate to ask questions, but skip openers like "Hello! I ran into an issue, can anyone help me?" You're much more likely to get a helpful answer by asking your real question directly than by asking permission to ask it.

  2. Be patient and don't spam: Once you've asked your question, give people some time to answer it. It's not a good idea to spam the channel with the same question at short intervals.

  3. Mentors are humans (and volunteers): After emailing a mentor, wait at least 48 hours for them to reply. Remember that they are humans, and most of them contribute in their volunteer time.

  4. Use proper English: It's really not a good idea to use SMS-speak on IRC or mailing lists, and excessive use of question marks is frowned upon. You should be respectful, but addressing mentors as Sir/Ma'am is not a great idea.

Final words

If you follow the steps above sincerely, you'll have a great chance of getting selected for GSoC this year. If you have any doubts, feel free to ask them in the comments below.

PS: A little background about me

I was a Google Summer of Code student with Drupal in 2014 and org admin for Drupal in Google Code-In 2014.

Jan 23 2015
Jan 23

While we know there are over 33,000 Drupal developers around the globe, I had no idea how strong Drupal was in India until I was there with Rachel Friesen, scouting locations for a possible DrupalCon Asia. By meeting with the community at camps, meetups, and dinners, we saw first hand how strongly India is innovating with Drupal and contributing back to the Project.

When it comes to geographic referrals, India is the second-largest source of traffic. They aren't second in contributions yet, but things are changing. I was especially impressed with the relationship between Tata Consultancy Services (TCS) and Pfizer, a $51.5B life sciences company. Pfizer allows TCS to contribute their code, which is often not allowed for legal reasons. Since contributing back is one of Pfizer's top values, they asked TCS to make contribution part of their culture, and they did. At TCS, Rachit Gupta has created contribution programs that teach staff how to contribute and give them time during work hours each week to contribute code. With a staff of several hundred developers, TCS can become a mighty contribution engine for the Project.

I’m equally impressed by other Indian web development consulting agencies that I met like Axelerant, Blisstering Solutions, Kellton Tech, and Srijan, who also have a contribution culture in their organizations. They even set up KPIs around staff contributions to make sure they are keeping this initiative top of mind.

While India celebrates its 68th birthday on January 25, it's a time to celebrate its growth as a nation, and, in its own way, Drupal has a hand in the country's prosperity. A Drupal job search site shows there are over 15,000 Drupal jobs in India, and all of the companies I talked to are growing their teams to meet that demand. Imagine if this contribution culture were fully embraced by Indian web development companies; the impact on the Project would be significant.

Individuals are also stepping up to support the Project and there is a passion for contribution that is spreading. I keynoted DrupalCamp Delhi, where over 1,000 people registered and 575 people attended. I saw first hand how dedicated the organizers were to make the event informative and fun. Several sprint mentors were on hand to lead more than 75 people through a full day sprint. Plus, the following weekend was Global Sprint Weekend and sprints popped up all over India in Bangalore, Chennai, Delhi, Goa, Hyderabad and Pune.

Not only are Drupalers in India helping the Project, but they are also using Drupal to create change in India with leapfrog solutions that give Indians access to more digital services. For example, many villages don’t have access to products found in major cities due to lack of infrastructure. The village stores simply can’t scale to buy and hold large quantities of inventory.

Iksula, an Indian eRetail consulting agency,  created a headless Drupal solution for Big Bazaar, India’s largest hypermarket, which provides lightweight tablets for store owners throughout India. Using those tablets, villagers can go into their local store and buy their goods online. The products are delivered to the shop owner, who hand delivers products to the consumer, giving people easier access to goods that can improve their quality of life.

As another example, we can look at IIT Bombay, India's top engineering university, which uses Drupal at the departmental level. Professors P Sunthar and Kannan are taking Drupal to the masses by creating a MOOC in conjunction with edX. The work is funded by a government initiative called FOSSEE (Free and Open Source Software for Education), and through it, Indian university students can watch videos on several open source technologies, including Drupal.

The initiative bridges learning divides by providing the trainings in several of the languages spoken throughout India, and by providing low-cost tablets for students who do not have a personal computer. This well-thought-out program can help students learn the tools faster and meet the needs of future employers.

India has clearly embraced Drupal. They are making innovative solutions with the software and they are learning to contribute that work back to the Project. It's for these reasons that we want to host DrupalCon Asia. It will be a chance to highlight India's Drupal talent and accelerate the adoption of a contribution culture.

A huge thank you to Chakrapani R, Hussain Abbas, Rahul Dewal, Jacob Singh, Mayank Chadha, Parth Gohil, Ankur Gupta, Piyush Poddar, Karanjit Singh, Mahesh Bukka, Vishal Singhal, Ani Gupta, Rachit Gupta, Sunit Gala, Professor P Sunthar and all the other community members who helped organize our trip to India. I’m personally moved and professionally inspired by all that you do.

Image credit to DrupalCamp Delhi

Jan 23 2015
Jan 23

So, here at Lucius HQ we are planning to build a RESTful API (web services) on top of our Drupal distribution, OpenLucius.

We want to do this so that all third-party programmers, and thus third-party applications, can integrate with OpenLucius, not only Drupal developers and Drupal modules.

For example: integrating time tracking with Toggl, invoicing with Freshbooks, or connecting to other trackers like Jira, Asana or Basecamp. And there are a lot more apps out there with huge potential you can tap into.

So, here is a brief intro to web services in Drupal:

What is a web service API

W3C defines a web service as follows:

‘A Web service is a software system designed to support interoperable machine-to-machine interaction over a network’.

In other words: a web service is a documented and defined way for two computers to communicate with each other over the internet. A computer can be anything connected to the internet, even a PlayStation, a smart watch or a thermostat. Think of 'the internet of things'.

Application Programming Interface

API stands for Application Programming Interface. APIs define how software, and thus computers, can communicate with each other. We use APIs constantly without noticing it. For example, when you attach your laptop to an external monitor, the connection is handled through an API: the programs 'communicate' with each other without any human intervention.

Standardized and documented

An API can also be seen as a standardized and documented way to get access to the content and functionality of an application. For example, as an independent developer you can get access to data from Facebook using the Facebook API; Facebook's 'Graph API Explorer' is an example that everyone can use.

API documentation

API Documentation is very important, otherwise nobody will know how to use the API to obtain the correct information. An API is worthless without proper documentation.

How does it basically work

An external application makes a request for data through a Drupal web service API. Drupal passes the data back in an appropriate structured format (e.g. JSON) so that the external application can use it. The external program can also create users, create nodes, reset passwords, etc.
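To make that concrete, here is a minimal sketch in plain JavaScript of a client consuming such a response. The endpoint path and field names are assumptions for illustration; the exact shape depends on the module and site configuration.

```javascript
// A sample response, loosely shaped like what a Drupal REST module might
// return for a node. The field names here are assumptions, not a
// guaranteed Drupal format.
var sampleResponse = {
  nid: '1',
  title: 'Hello world',
  body: { value: '<p>First post!</p>', format: 'filtered_html' }
};

// Reduce the raw node response to just what a client-side view needs.
function toTemplateVars(node) {
  return {
    label: node.title,
    url: '/node/' + node.nid,
    content: node.body ? node.body.value : ''
  };
}

// In a browser the request itself could look like this (not executed here):
// fetch('/node/1.json', { headers: { Accept: 'application/json' } })
//   .then(function (res) { return res.json(); })
//   .then(toTemplateVars)
//   .then(render);

console.log(toTemplateVars(sampleResponse).label); // "Hello world"
```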

Why web services

In most cases web services are used to provide mobile applications with data. Take a news app, for example: its news items have to be managed somewhere. The same news items are posted on the website, but also in Android apps, and in the future maybe on smart TVs, smart watches and anything else yet to be invented.

Future proof

Web services are future-oriented: whatever comes after iOS or Android, the new application platform will also be able to retrieve and modify data via the desired web service API.

In other words: the internet of things can be centrally provided with content, users, etc.

Web services in Drupal

There are several modules in Drupal that can facilitate web services, the most famous are Restws and Services.

These two modules ensure that data and internal functions are openly served to other applications through a Drupal web services API. An external application can 'communicate with these modules' and receive structured data that can be used. Examples of external applications: an iOS or Android app, but also a Playstation, smart TV, smart watch or even a thermostat. In other words, all things in the internet of things.

Drupal Module: Restws

Restws is great at RESTful web services and the necessary CRUD actions for all Drupal entities, but it offers no additional web service protocols like SOAP, XML-RPC, etc. It is also not possible to define and configure 'service endpoints'.

Drupal Module: Services

The Services module can do everything that Restws can do and more. It is a complete toolkit for providing Drupal with web services. It knows Drupal's node, entity and CRUD systems and lets you create and configure service endpoints yourself. The module also supports multiple interfaces like REST, XML-RPC, JSON, JSON-RPC, SOAP, AMF and more.

It also provides a number of standard resources, allowing you to get the standard web services up and running quickly, for example requesting the details of a node; this can be done within 10 minutes. Specific use cases obviously require more effort, but for most custom needs the Services module facilitates a large part of the required functions, such as creating users, creating nodes or resetting passwords.
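As a sketch of what talking to such an endpoint looks like from a client: assume an endpoint configured at /api/v1 (the path is whatever you set up in the Services admin UI; the helper below is just an illustration).

```javascript
// Base path of a Services endpoint; '/api/v1' is an assumed example.
// The real path is whatever you configure in the Services admin UI.
var endpoint = '/api/v1';

// Build the URL for a resource exposed through the endpoint, e.g. the
// standard node resource: GET <endpoint>/node/<id>.json
function resourceUrl(resource, id, format) {
  return endpoint + '/' + resource + '/' + id + '.' + (format || 'json');
}

// A browser client could then fetch a node like this (not executed here):
// fetch(resourceUrl('node', 1))
//   .then(function (res) { return res.json(); })
//   .then(function (node) { console.log(node.title); });

console.log(resourceUrl('node', 1)); // "/api/v1/node/1.json"
```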

Drupal Module: Views data sources

This module lets you create endpoints through Views and serve data through those endpoints, all without coding a line; the relevant data can be configured in the View. Note that it is still an alpha version, but it can be handy for standard lists, for example the last 10 news items.

More complex use cases

But when the query becomes more complex, this module does not yet work satisfactorily. You will then have to create a custom endpoint in code and write your own queries. These custom endpoints do let you hook into the Services module, which facilitates many functions, so there is no need to code a Drupal web service from scratch.

Web services in Drupal 8

Drupal 8 incorporates web services in the Drupal core, so modules will not be needed anymore!

Wrap up

Ok, that's it for now. Since we are currently working enthusiastically on a major Drupal web services project, more blogs will follow with specific use cases.

-- Cheers!


Source header image

This video is a very good resource, thanks Mkorostoff.

Oh yeah, and don't forget to check out other YouTube videos on Drupal web services.

Jan 23 2015
Jan 23


2015-03-28 09:00 - 17:00 America/Chicago


Join us for the second annual Drupalcamp New Orleans on Saturday, March 28, 2015. Visit the event website for more information, to register and to submit a session.

Drupalcamp New Orleans
Saturday, March 28, 2015 - 9 am - 5 pm
Launch Pad
643 Magazine St
New Orleans, LA

Jan 23 2015
Jan 23

In a recent blog post, Drupal 8 co-maintainer Alex Pott highlighted a seismic shift in Drupal that's mostly slipped under the radar. In Drupal 8, he wrote, "sites own their configuration, not modules".

To see why this change is so far-reaching, it's useful to back up a bit and look at where exportable configuration comes from and what's changed.

In Drupal 7, a lot of site configuration (views, rules, and so on) can be exported into files. There are two main use cases for exportable configuration:

  • To share configuration among multiple sites.
  • To move configuration between multiple versions of a single site.

By and large, the two use cases serve different types of users. Sharing configuration among multiple sites is of greatest benefit to smaller, lower-resourced groups, who are happy to get the benefits of expertly developed configuration improvements, whether through individual modules or through Drupal distributions. Moving configuration between different instances of the same site fits the workflow of larger and enterprise users, where configuration changes are carefully planned, managed, and staged.

In Drupal 7, both use cases are supported. An exported view, for example, can be shared between multiple sites or between instances of the same site. The Views module will treat it identically in either case.

If a site admin chooses to customize exported configuration in Drupal 7, the customized version is saved into the site database and overrides the module-provided version. Otherwise, though, the site is on a configuration upgrade path. When the site is upgraded to a new release of the module that provided the configuration, it receives any changes that the module author has made--for example, refinements to a view. At any time, a site admin can choose to toss out changes they've made and get the module-provided view--either the one they originally overrode or a new, updated version.

If anything, the multiple site use case was a driving force behind the development and management of configuration exports. The Features module and associated projects - Strongarm, Context, and so on - developed configuration exporting solutions specifically for supporting distributions, in which configuration would be shared and updated among tens or hundreds or thousands of sites. Yes, Features could be and is used for staging changes between instances of a single site; but the first and foremost use case was sharing configuration across sites.

For Drupal 8, however, the entire approach to configuration was rewritten with one use case primarily in mind: staging and deployment. The configuration system "allows you to deploy a configuration from one environment to another, provided they are the same site."

In Drupal 8, module-provided configuration is imported once and once only: when the module is installed. The assumption is that, from that point onward, the configuration is "owned" by the site. Updated configuration in modules that have already been installed is, by design, ignored. Importing it, as Pott notes, might lead to "a completely new, never-seen-before (on that site) state." "Fortunately," he writes, "Drupal 8 does not work this way."
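To illustrate, module-provided configuration in Drupal 8 ships as YAML files in the module's config/install directory. A hypothetical example (the module, file and key names are made up):

```yaml
# mymodule/config/install/mymodule.settings.yml
# Drupal 8 copies this file into the site's active configuration exactly
# once, when the module is installed. If a later release of the module
# changes this file, the site's copy is not updated.
items_per_page: 10
show_teasers: true
```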

It's indeed a fortunate outcome if you're building an enterprise site and place a premium on locking down and controlling every detail of configuration.

But for most current Drupal sites and for distributions? The benefits are not so clear cut.

On the plus side, much of what previously was unexportable in Drupal core (content types, fields, variables, and so on) is now supported natively. No more heavy-handed workarounds in the Features module for so-called "faux exportables": components like user roles, content types, and fields that Drupal 7 core stores only in the database.

But, with Drupal core firmly on the "single site" configuration management side, users wanting to benefit from module-provided configuration updates and developers of distributions may be left fighting core every step of the way.

It's hard not to conclude that Drupal 8 ties configuration management to a (primarily, enterprise-focused) single-site staging model, and in the process, neatly undermines the use cases that largely brought us exported configuration in the first place.

That said, there are emerging initiatives including Configuration Revert that may help. More on those in future posts.

Jan 22 2015
Jan 22

A couple of weeks ago I hacked together a quick proof of concept of editing the same template for use on both the client side and the server side with Drupal 8. It looked like this:

Sunday hack. Make #headlessdrupal use #twig for client side templates #drupal #drupaltwig.

— eiriksm (@orkj) January 4, 2015

If you click the link you can see an animated gif of how I edit the Bartik node template and the change is reflected in a simple single page app. Or one of these hip headless Drupal things, if you want.

So I thought I should do a quick write-up on what it took to make it work, what disadvantages come with it, what does not actually work, and so on. But then I thought to myself: why not make a theme that incorporates the thoughts from my last post, "Headless Drupal with head fallback"? So I ended up making a proof of concept that is also a live demo of a working Drupal 8 theme, with the first page request rendered on the server and subsequent requests rendered fully client side. Both use the same node template for full views and for the node listing on the front page. If you are eager and want to see it, this is the link.

Next, let's take a look at the inner workings:

Part 1: Twig js

Before I even started this, I had heard of twig.js, so my first thought was to just throw the Drupal templates at it and see what happened.

Well, some small problems happened.

The first problem was that some of the filters and tags we have in Drupal are not supported out of the box by twig.js. Some of these are probably Drupal-specific, and some are extensions. One example is the {% trans %} tag for translating text. In general this was not a big problem, except that I did what I usually do in a POC: I quickly threw together something that worked, with the result, for example, that the trans tag just returns the original string. That is obviously not its intended use, but at least the templates could now be rendered. Part one: complete.

Part 2: Enter REST

Next I needed to make sure I could request a node through the REST module, pass it to twig.js, and render the same result Drupal would render server side. This turned out to be the point where I ended up with the worst hacks. You see, ideally I would just have a JSON structure that represents the node and pass it to twig.js, but there are a couple of obvious problems with that.

Consider this code (following examples are taken from the Bartik theme):

<a href="{{ url }}">{{ label }}</a>

This is unproblematic: if we have a node.url property and a node.label property on the object we send to twig.js, it just works out of the box. Neither property is available in that form in the default REST response for a node, but a couple of assignments later, that problem went away as well.

Now, consider this:

{{ content|without('comment', 'links') }}

Let's start with the filter, "without". That at least should be easy: we just need a filter that makes sure the comment and links properties of the node.content object are not printed here. No problem.
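As a sketch, such a filter is just a few lines of plain JavaScript (how you register it with twig.js is left out here; the wiring below is an assumption):

```javascript
// A minimal client-side take on Drupal's 'without' Twig filter: return a
// shallow copy of a render object minus the named keys, so that
// {{ content|without('comment', 'links') }} skips those parts.
function without(content) {
  var excluded = Array.prototype.slice.call(arguments, 1);
  var copy = {};
  Object.keys(content).forEach(function (key) {
    if (excluded.indexOf(key) === -1) {
      copy[key] = content[key];
    }
  });
  return copy;
}

console.log(without({ body: '<p>Hi</p>', comment: '...', links: '...' }, 'comment', 'links'));
// { body: '<p>Hi</p>' }
```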

Now to the problem. The content variable here should include all the rendered fields of the node. As was the case with label and url, .content is not actually a property in the REST response either. That makes the default output from the REST module not very usable for us, because to make this generic we would also have to know which fields to compose into this .content property, and how to render them. So what then?

I'll just write a module, I thought, as I often do: make it return more or less the render array, which I can pass directly to twig.js. So I started looking into what that looks like in Drupal 8, and at how I could tweak the render array into more or less the minimum of data needed to render the node. I saw that I would need to recurse through the render array 0, 1 or 2 levels deep, depending on the property. I would get, for example, node.content with markup in all its children, but also node.label without children, just the actual title of the node. Which again made me start hardcoding things I did not want in the response, just as I had started hardcoding things I wanted from the REST response.

So I gave up on the module; after all, this is just a hacked-together POC, so I'll be frank about that part. I went back to hardcoding it client side instead. Not the most flexible solution, but at least, part two: complete.
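For what it's worth, the kind of "half-rendered" payload I was hardcoding toward looks roughly like this (the key names are my own sketch, not an actual Drupal response):

```javascript
// Scalar values for simple properties, rendered markup one level deep for
// the rest. This shape is an assumption for illustration.
var node = {
  label: 'My first node',   // plain title, no children
  url: '/node/1',
  content: {                // rendered markup in the children
    body: '<p>Rendered body HTML</p>',
    field_image: '<img src="/sites/default/files/example.jpg" alt="">'
  }
};

// Composing the {{ content }} variable is then just concatenating the
// rendered children (minus anything filtered out):
var rendered = Object.keys(node.content).map(function (key) {
  return node.content[key];
}).join('\n');

console.log(rendered);
```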

Part 3: Putting the pieces together

Now, this was the easy part. I had a template function that could accept data, and I had transformed the REST response into the pieces I needed for the template. The rest was just adding a couple of AJAX calls and some pushState for the history (which reminds me: this probably does not work in all browsers). Then I bundled things together with some well-known front-end tools. Of course, all of this is in the repo if you want the details.


Twig on the server and on the client. Enough said, right? 

Well, in its current form this demo is not something you would just start to use, but hopefully it gives you some ideas or inspiration. Or maybe it will inspire (and inform) me of the smartest way to return a "half-rendered render array".

Also, I would love to get some discussion going regarding how to use this approach in the most maintainable way.

Some thoughts on how I would improve this if I would actually use it:

  • Request templates via ajax.
  • Improve escaping.
  • Incorporate it into a framework (right now it is just vanilla JS).
  • Remove hacks, actually implement all the filters.

Finally: the code is up on GitHub, and there is a demo on a test site on Pantheon. Huge props go out to both the Twig and twig.js authors. Just another day standing on the shoulders of giants.

I'm going to end this blog post with a classy gif from back in the day. And although it does not apply in quite the way these gifs were traditionally used, I think we can say that the things said in this blog post are not set in stone, in regards to either construction or architectural planning.

Jan 22 2015
Jan 22

Berkeley approached us to not only build a website for an exciting new project but to also develop its brand identity from scratch.

The project was the Berkeley Institute for Data Science (BIDS), a new initiative to provide a common collaborative space for research fellows, faculty and anyone at Berkeley working with data science in some way.

The White House hosted an event to announce the initiative, which is funded by a $37.8 million grant from the Gordon and Betty Moore Foundation and the Alfred P. Sloan Foundation. Berkeley is one of three institutions to receive this funding, in addition to New York University and the University of Washington.

Now that the site is live, I’d like to share some of the processes I used to develop the identity and site design.

BIDS Final Logo

BIDS Homepage

Mood Boards

I started with some exploration. What is big data? What does data science look like? How far can we push a visual motif that implies big data without being too literal or narrow in focus?

With some smart keyword guessing I hunted down images from Designspiration and Google Image searches, and used Niice to collect a moodboard of related reference images. This got me familiar with some data visualization techniques and started turning the gears in my brain.

BIDS Niice Moodboard

Time to sketch! I like to limit my creative output to paper early in the process to explore more ideas quickly. I don’t expect much from the sketches at this point; I tried a lot of different things and hunted for visual clues that could lead me in new directions.

BIDS Sketches

Getting Digital

Sketching yielded a few interesting ideas; I was ready to digitize some of them and start to explore typography in Illustrator. This included some pretty rough options (shown below) that I never presented to the client, and a handful of acceptable options that I refined into our first deliverable.

BIDS Design Exploration 1

BIDS Design Exploration 2

BIDS Design Exploration 3


I boiled these rough explorations down to three distinct design directions. I had a good feeling about using a dot-and-line motif in some form. Here are a few of the highlights from the first round:

BIDS Logo Round 1


From here we were able to narrow to two possible directions for refinement:

BIDS Round 2

We also needed to account for an acronym version of the logo to be used in some situations:

BIDS Acronym Logos

The client selected the second option with concentric, curved lines. For the final logo we moved the lower dot to the bottom-right to function as visual punctuation. We were already doing this in the acronym version and it kept things a little cleaner:

BIDS Logo Final

Honoring the Brand

I knew we would need to comply with Berkeley's brand guidelines, which are conveniently available on a public website. However, the guidelines provided a lot of flexibility in both color and typography, which gave me the opportunity to explore.

I used Freight Sans Pro for both the logo and throughout the entire site design. Some of our early concepts included Freight Text Pro (a serifed typeface), but we opted for the more contemporary feel of all sans-serif typography.

BIDS Homepage

The core color palette was lifted directly from the brand guidelines, with some minor modifications made for hover states or secondary UI elements.

Fortunately, the brand guidelines are well designed and already very functional for use on the web! This made it easy to design a site that both matches the Berkeley identity and stands on its own.

BIDS About Page

BIDS Project Page

Jan 22 2015
Jan 22

I was hired by the Drupal Association in October 2014 to develop a new revenue stream from advertising. For some time we've been trying to diversify revenue streams away from DrupalCon, both to make the Association more sustainable and to ensure that DrupalCons can serve community needs, not just our funding needs. We've already introduced the Drupal Jobs program, and now, after conversations with the community, we want to put more work into advertising initiatives.

This new revenue stream will help fund various initiatives and improvements, including better account creation and login, organization and user profile improvements, a responsive redesign of Drupal.org, issue workflow and Git improvements, making search usable, improving tools to find and select projects, and the Groups migration to Drupal 7.

We spent time interviewing members of the Drupal Association board, representatives of the Drupal Community, Working Groups, Supporting Partners, and Drupal businesses, both large and small, to help develop our strategy and guidelines. Our biggest takeaways are:

  • Advertising should not only appeal to advertisers, but also be helpful to our users and/or our mission.
  • When possible, only monetize users who are logged out and not contributing to the Project. If you’re on Drupal.org to do work and contribute, we don’t want you to see ads.
  • Don’t clutter the site, interfere with navigation or disrupt visitors, especially contributors.
  • Do not put ads on pages where users are coming to work, like the issue queue.
  • Advertising products should be inclusive, with low cost options and tiered pricing. We want to make sure that small businesses without huge marketing budgets have the opportunity to get in front of the Drupal Community.
  • Create high impact opportunities for Partners that already support the Community.
  • Address the industry-wide shift to Programmatic Advertising, which is the automated buying and selling of digital advertising.

There are already advertising banners on Drupal.org; however, we need to expand their reach to hit our goals. We’re trying to address challenges for our current advertisers, including a relatively low number of views on pages with ads, which makes it difficult for them to reach their goals.

We’re also facing industry-wide challenges in digital advertising. Advertisers are looking for larger, more intrusive ads that get the user’s attention, or at the very least use standard Interactive Advertising Bureau (IAB) ad sizes, which are larger than the ads we offer on Drupal.org.

We came up with a new line of products that we feel will help us reach our goals without disrupting the user experience or the Drupal Association Engineering Team roadmap. We want our Engineering Team to fix search on Drupal.org, not spend time developing and supporting major advertising platforms.

2015 Advertising Initiatives:

  • The ongoing development of curated content with banner ads including resource guides, content by industry and in the future, blog posts.
  • Continued display of banner ads on high profile pages like the Homepage, Marketplace and Case Studies Section.
  • Sponsored listings from Supporting Technology Partners (similar to Hosting Listings).
  • Opt-in email subscriptions with special offers from our Supporters.
  • Audience Extension: a secure, anonymous, non-interruptive way to advertise to Drupal.org visitors. It allows advertisers to programmatically reach the Drupal.org audience while they are on other websites, through Ad Networks and Exchanges.

I wanted to spend most of my time explaining Audience Extension, since it’s unlike anything we’ve done in the past, and it may prompt questions. This product makes sense because it addresses all of the challenges we’re facing:

  • It’s affordable for small businesses; they can spend as little as $200 on a campaign.
  • We don’t need to flood the site with ads and disrupt the user experience.
  • It’s relatively easy to implement; we won’t interrupt the engineering team or their efforts to improve Drupal.org.
  • We will only target anonymous (logged out) users.
  • We will support “Do Not Track” browser requests.
  • This is an industry-wide standard that we’re adopting.
  • Anonymous users will have the option to opt-out.
  • This improves the ad experience on other sites with more relevant, useful ads that also support the community.

How does Audience Extension Work?

We’re partnering with Perfect Audience, a company that specializes in retargeting and offers a unique audience extension solution called Partner Connect. We add a Perfect Audience JavaScript tag to the Drupal.org source code. This tag is only loaded on pages served to logged-out users. The tag places a Perfect Audience cookie in the visitor’s browser that indicates that they recently visited Drupal.org. Once that cookie is in place, an advertiser looking to reach the community can advertise to those visitors on Facebook, Google’s ad network, and many other sites that participate in major online ad networks. Advertisers create and manage these campaigns through their Perfect Audience accounts. They pay for the ads through Perfect Audience, and we split the revenue with Perfect Audience and the ad networks that serve the ads.

  • The program is anonymous. No personally identifiable information (such as email address, name or date of birth) is gathered or stored.
  • No data is sold or exchanged; this merely gives advertisers the opportunity to buy a banner ad impression within the Perfect Audience platform.
  • It's easy to opt-out. You can just click over to the Perfect Audience privacy page and click two buttons to opt out of the tracking. Here's the link.
  • Drupal.org will support “Do Not Track” browser requests, and only users who are not logged in (anonymous) will be included in the program.
  • It does not conflict with EU privacy rulings. Advertiser campaigns for Partner Connect can only be geotargeted to the United States and Canada right now.
  • Only high quality, relevant advertisers who have been vetted by an actual human will be able to participate in this program. Some good examples of Perfect Audience advertisers would be companies like New Relic and Heroku.
  • Perfect Audience is actually run by a Drupaler! The first business started by founder Brad Flora back in 2008 was built on Drupal. He spent countless hours in the IRC channel talking Drupal and posting in the forums. He understands how important it is to keep sensitive pages on Drupal.org ad-free, and he’s very excited to be able to help make that happen.
  • This program has the potential to generate significant revenue for the Drupal Association and Project over time as more advertisers come on board.
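The tag-loading rules described above (anonymous users only, honoring “Do Not Track”) boil down to a simple client-side check. The sketch below is purely hypothetical: the function names, inputs, and tag URL are mine, not the actual Drupal.org or Perfect Audience code. It relies on the fact that Drupal 7 adds a “logged-in” class to the body element for authenticated users.

```javascript
// Hypothetical decision logic: should the retargeting tag load at all?
function shouldLoadRetargetingTag(bodyClasses, dnt) {
  // Drupal 7 marks authenticated users with a "logged-in" body class.
  var loggedIn = bodyClasses.indexOf('logged-in') !== -1;
  var doNotTrack = dnt === '1'; // honor the browser's "Do Not Track" signal
  return !loggedIn && !doNotTrack;
}

// On the page itself this might be wired up like so (placeholder URL):
function maybeLoadTag(doc, nav) {
  var classes = (doc.body.className || '').split(/\s+/);
  if (!shouldLoadRetargetingTag(classes, nav.doNotTrack)) {
    return false; // respect contributors and opted-out visitors
  }
  var s = doc.createElement('script');
  s.async = true;
  s.src = 'https://tag.example.invalid/retargeting.js'; // placeholder, not a real tag
  doc.head.appendChild(s);
  return true;
}
```

The real check happens server-side on Drupal.org (the tag simply isn’t printed for authenticated users); this client-side version only illustrates the decision rules.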

It’s important that we fund improvements, and that we do so in a responsible way that respects the community. We anticipate rolling out these new products throughout the year, starting with Audience Extension on February 5th.  Thanks for taking the time to read about our initiatives, and please tell us your thoughts!

Jan 22 2015
Jan 22

On January 31 and February 1 the 15th edition of the FOSDEM event will be held in Brussels, Belgium. FOSDEM (Free and Open Source Software Developers’ European Meeting) is the largest gathering of open source community members in Europe. More than 5000 people will come from all parts of the world to meet, share ideas and collaborate.

As the name says, the event is highly developer centric, so the main focus has always been on the technology and the code. But open source software has graphical user interfaces too! Buttons to click, sliders to drag, forms to fill out, boxes to check, screens to swipe and what have you.

Useful software attracts users. To keep them around and attract more users, the useful has to be made usable. Which means uncovering and prioritising user goals and needs and doing the work to find out how to best serve those. That’s where design comes in.

This year FOSDEM will have its first-ever “devroom” dedicated to the topic of open source design. User experience architects, interaction designers, information architects, usability specialists and designer/coder unicorns will share experiences and discuss the good and bad of design in open source environments.

Open source software is a driving force behind all things online. As more aspects of business, culture, society, humanity as a whole move into the digital domain, it becomes ever more important to ensure that people don’t get left behind because of the sheer complexity of it all. There’s a lot that the craft of design can contribute to ensure this.

I’ll deliver a short talk about how we started, grew and maintain a user experience design team within the Drupal project. Otherwise, the schedule is looking great. I’m looking forward to meeting my open source designer colleagues.

See you there?

Jan 22 2015
Jan 22

A somewhat common request for projects, especially intranets, is to provide a single sign-on (SSO) system. At a rudimentary level, an SSO system allows one site to handle all logins (authentication) for a group of sites, rather than a visitor needing a separate login for each one. For an organization with several sites, it can greatly reduce the headache for its clients, customers, employees, etc., increase visitor satisfaction, reduce maintenance costs, and potentially increase sales.

Another common use of an SSO system is to transparently log in visitors who are also logged into a local network where a directory service is being used to manage access across the network, e.g. the open LDAP standard, Microsoft’s Active Directory, Novell Open Enterprise Server, etc. This is most often used for local network sites and services where a level of physical security is assumed, i.e. the only people on the network are supposed to be there.

Digging deeper

At a technical level, most SSO services start with the visitor logging into one central service, the authentication site/service. After that, the visitor can browse to other sites and services and be transparently logged in. When the visitor connects to one of the other sites, the new site checks back with the central server to confirm the authentication. In practice this may mean the visitor is bounced back and forth between a series of pages for the authentication to be confirmed. More commonly, though, the confirmation is handled behind the scenes, directly between the servers themselves, greatly reducing the chance of foul play.

Either which way, these are well known and understood problems that have been solved several times before. Because of this there are many known solutions for adding SSO to a site or a group of sites. Some options for connecting to an SSO system as a client, including LDAP, SAML, CAS and OAuth, already have stable modules available for Drupal. Should there be a need to connect multiple Drupal sites together there's even a custom solution available called the Bakery module, which has been in use on itself for many years and serves hundreds of thousands of users across its variety of sites.

And here’s one I made earlier

Because of all this, it rarely makes sense to write a custom solution; instead, an existing solution should be sought that fits both the server and the secondary / client side of the equation. When researching SSO options, the most important thing to find out is what is already available on the central login server, perhaps as an optional extra; an existing but unused system may already be in place that could save tens or even hundreds of hours of development time.

Using an existing single sign-on system provides a wealth of benefits:

  • A greatly reduced amount of custom code, thus less code to manage.
  • Publicly available code results in much higher security standards as anyone can audit the code.
  • Known APIs result in easier integration and easier maintenance than custom code.
  • Known APIs increase the likelihood of being able to find someone with experience in working with the API, rather than having to start from scratch.
  • Once the pieces are in place, it’ll usually Just Work™.

… But not the custom module!

Writing a custom SSO solution comes with many disadvantages:

  • SSO systems have been built before; there are lots of options out there.
  • More custom code for the site maintainers to maintain.
  • It requires architecting a custom security algorithm to match the specific requirements.
  • Without paying for a 3rd party service, there's no "automatic" vetting of the security algorithm outside of the immediate development team, thus a greater chance of security holes existing.
  • They result in more time (and budget) spent writing solutions that already exist.
  • More time (and budget) is then also spent maintaining the custom code in the future.

Hot! Code slowly!

That said, there can be a few (very few!) reasons why a custom SSO solution is required:

  • There may not be an existing SSO system available for the systems that are being connected.
  • One or more portions of the system may be behind custom firewalls that cannot be crossed, which would block the server-side account confirmation.
  • Existing systems may not support unusual custom requirements that are outside of the project's control, e.g. integration with physical card or thumb readers that are mandated for use, etc.

If a project does end up requiring a custom SSO solution, a few things should be kept in mind:

  • See if there's any way to use an existing codebase and just write a plugin to handle the unique requirements; this would reduce the amount of custom code that would need to be written and maintained.
  • As this opens up the castle's front gate to invaders, so to speak, secure code is of paramount importance, so make sure the code follows the Drupal security best practices.
  • Include a timestamp in the algorithm and necessary logic to ensure that there’s an automatic timeout / expiration of the SSO; this will help avoid scenarios of someone using a link from a cached page on someone else’s computer.
  • Ensure that all traffic is secured via HTTPS; while not perfect (the IT world regularly uncovers ways it can be circumvented), it still adds a reasonable base layer of security that is much more difficult for someone (e.g. at a coffee shop) to snoop than an unsecured connection.
  • Have both the authentication logic / algorithm and the code itself vetted by a 3rd party.
  • Listen to all feedback regarding the system's security and make every possible effort to remove potential avenues of attack.
  • Do not attempt to write new encryption algorithms; there are plenty of highly secure algorithms supported by PHP’s mcrypt library that will work with plenty of other systems.
  • Use the strongest encryption algorithms supported by each platform, don't skimp on something this important, especially when there would be negligible difference in terms of system / site responsiveness for the end user.

Don't design a new mouse trap

In summary – known and trusted solutions exist for adding a single sign-on system to a website and writing a custom system should always be the last resort.

Additional Resources

Best Practices for Custom Modules | Mediacurrent Blog Post
Introducing the Mediacurrent Contrib Committee | Mediacurrent Blog Post

Jan 22 2015
Jan 22

I've been privileged to attend almost every DrupalCon since Barcelona in 2007. I missed Paris in 2009, but I had a good excuse - my wife was due to give birth to our first child around the same time.

The relocation of the Commerce Guys headquarters to Paris has given me plenty of time to catch up on the missed sightseeing, but I still need to figure out how to get to Sydney after missing that one.

Without access to those hundreds of Drupal developers and enthusiasts in 2007, I never would have known anyone was even using Ubercart. I didn't know how to engage other developers remotely (my early forays into IRC were similar to webchick's, I believe), and there wasn't much going on in Louisville, KY where I called home. Meeting others in the Drupal community, learning from my peers, and being mentored directly by many of the same has grown me personally and professionally in ways I never would have expected.

That's why I'm excited about the opportunity to travel to Bogotá, Colombia for the first DrupalCon in Latin America, February 10-12. I can't wait to hear the keynotes from both Dries and Larry, two of my Drupal heroes, and to learn more about the latest developments in Drupal 8 core and contributed modules.

I'll personally be addressing two topics: Drupal Commerce 2.x for Drupal 8 (on behalf of bojanz) and growing a Drupal based product business. I also look forward to the conversations, shared meals, and sprints that make the conference so rewarding.

I strongly encourage you to come if you're in a position to do so!

With the help of Carlos Ospina, I've recorded a personal invitation in Spanish that I trust doesn't have me saying anything embarrassing. I'm sure my Spanish will be better for at least a week after spending time at the conference.

Jan 22 2015
Jan 22

Shifting to a content-driven commerce focus is a daunting challenge.

Whether you are a media company adding commerce to your site or a retail site wanting to add richer editorial, there are very different skillsets required to sell product versus those needed for writing and curating content. How do you successfully blend these skillsets — much less these seemingly disparate websites — into a single, cohesive whole?

It ain’t easy, but it’s worth it.

From Media to Commerce

Adding commerce to a media site is tricky. On the one hand, product recommendations can add a new dimension of value to both you and your readers. Just like advertising, though, (and maybe more so), you run the risk of corrupting a brand that your readers have come to trust.


If you are making the step into content-driven commerce, you must be willing to promote products on your site. Sounds like a no-brainer, right? But integrity is one of the things that readers value from media sites. And if they feel like they are being pushed toward a bad product (or even an unrelated product), they will likely revolt.

Now, the promotions don’t have to be in-your-face, “everything must go” used-car-lot promotions. In fact, those are the exact promotions that will spark a revolt. But you must be willing to add tasteful product descriptions and honest reviews and recommendations. This means putting your trusted brand behind a product that you like, and, more importantly, one that you think your readers will like.

Not selling out

There is a fine line between promoting product and selling out. Sometimes it’s easy to find. Don’t like a product? Think a product is cheaply made? Don’t recommend it no matter how sweet that affiliate commission looks.

But what about a product you love versus one that you like? The one you love, right? But what if that second product has a much better affiliate program?

It’s tricky. But you can probably find a way to promote both. The Wirecutter (and their sister site, The SweetHome) approach to product reviews is a great example of this. They write in-depth product reviews for different categories of gadgets. Each review has a recommended product along with explanations of why they did and didn’t like some of the other options they reviewed. Each product is a link to Amazon (and other stores) and every link has their affiliate code.

It’s a smart, if intense, solution that allows them to promote a lot of different products without selling out. In fact, it’s quite the opposite. Readers trust the site more because they go into so much detail about so many options.

From Commerce to Media

Now, if you are going in the opposite direction (adding content to your commerce site), then you’ll experience a range of other issues that can be even more challenging. In many respects, they run counter to much of the marketing culture that permeates most retail shops — unless those shops have come to value content-marketing and storytelling as a way to increase online sales.

Content Production

Editorial content is a whole new world. Marketing content goes through a series of edits and reviews. It’s often bland and boring. Intentionally so. You need to put the best foot forward for every product you sell, no matter how much that description might gloss over hard truths.

With a content-driven commerce approach, though, using your marketing style for your editorial content will sabotage your efforts. You need something with a voice and style that captures people’s attention and engages them on a personal level. Something that product descriptions almost never do.

Willingness to curate

Once you start producing content, you need to start curating it. What products are going to make it onto your top 10 list? Which set of widgets are you going to include in your how-to article? You know those items you promote are going to get more views and more clicks — even a bump in brand perception — that other products won’t.

After you’ve written the piece, you need to decide what content you’re going to promote on the homepage and throughout the site. Another tough decision. This one, though, fits closely in line with your sales planning process: which sale you’re promoting and when.

Treating content as a first-class citizen

Another aspect of content-driven commerce that may seem anathema to many commerce sites: treat your content like a first-class citizen. Specifically: give it equal weight on your homepage, which means treating it the same as you would a sale or other promotion. The challenge for many is that this feels like you are losing sales. But you’re trading a bump in short-term sales for long-term engagement.

There are many companies that have seemingly embraced content-driven commerce as a strategy. Big brands like Home Depot, Lowes, and Brooks Brothers are producing some amazing content. A quick glance at their homepages, though, and the only hint at this content is behind a single link. Everything on these pages is focused on the latest sale and other product promotions. This may be a strategic decision or a technological limitation. Regardless, these websites have yet to really embrace content as a cornerstone to their brand.

Admittedly, there are many ways to enter a website, from Google to social media. But what a company includes on its homepage speaks volumes about what the brand values.

Gaining trust

Does your audience see you as an expert on the products you sell (Crutchfield)? Or just as a fancy storefront (Best Buy)? In either case, gaining and maintaining the trust of your audience is critical, and, depending on your current relationship with your customers, may be an uphill slog.

Are you willing to write a bad review of a product? Are you willing to pull a product if there are no redeeming qualities? Are you willing to write content that doesn’t directly sell the product?

Imagine if Best Buy started producing content that actually helped their audience better understand and use the technology they were selling. As it is, the store (and by extension, website) has limited audience engagement and does nothing to pull anyone to their site — other than offer product promotions and discounts.

One of the fundamental requirements to succeeding with any kind of content-driven strategy is audience trust. You need to build trust with your audience and you can’t do that if they feel like you are selling them anything and everything.

The move to content-driven commerce

Making the decision to integrate content and commerce has its challenges. The exact challenges you face will really depend on the culture of your organization as well as the abilities and mindset of your staff. But if you’re willing to make the necessary changes to engage your audience and build their trust, you can make the transition.

If you’re moving from a media site into commerce, the key will be maintaining your readers’ trust and your own integrity. If you’re moving in the opposite direction, the challenge will be gaining your readers’ trust, which means making some pretty big organizational and cultural changes.

In both cases, though, you’ll find the move well worth the effort.

Jan 22 2015
Jan 22

By Steve Burge 21 January 2015

One of our members was watching our video class on Drupal's Workbench module and has been getting it set up on their site.

They ran into one problem: how to use the new moderation states they added.

They wanted add a tab so that people could easily see the content in a particular moderation state.

In this tutorial, we'll show you how to make that happen.

Note: we are going to assume some knowledge of Workbench. If you're new to or struggling with Workbench, watch our video class.

First, if you don't have one, set up a new state that content can be assigned to.

  • Go to Configuration > Workbench Moderation
  • Add a new state. In this example, we added "Final Editor Approval":
Add a New Moderation State Tab to Workbench
  • Go to Structure > Views
  • Find the "Workbench Moderation: Content" view. This is the view that creates the default tab for the "Needs Review" state.
  • Click Clone:
  • Click Continue.
  • Now, we need to create a custom menu link for our "Final Editor Approval" state. Click "Tab: Needs review".
  • Change the title of the menu link:

Now we need to modify the content that is being shown when people click this menu link.

  • Under Filter Criteria, click "Workbench Moderation: State (= Needs Review)".
  • Choose the moderation state that you created earlier:
  • Save your view.
  • Go to your Workbench dashboard and you'll see your new tab.
Jan 21 2015
Jan 21

We're very pleased to announce that the new Drupal Console project is now multilingual!

We put a lot of hours into this effort because we felt it was so important to broaden the base of Console users and help them get off to a great start developing modules for Drupal 8. It will be the last major feature to be added before an upcoming code freeze that will allow us to work on Console documentation - another effort to broaden the base of users that can benefit from the project.

Here are a few reasons why we felt it was so important to add multilingual capabilities to the Console project:

  • Drupal is multilingual, and Drupal 8 is even more so than ever. Take a look at D8MI (the Drupal 8 Multilingual Initiative). We want to ship this project with capabilities like these.
  • It feels good and more natural to use a tool in your mother tongue. Most of our project contributors are not native English speakers.
  • Separating messages from code makes updating text messages easier; there's no need to know or learn PHP, or use an IDE, to contribute.
  • David Flores and I will be presenting a session in Spanish related to this project at DrupalCon Latino in Bogotá, and we knew making the project multilingual would be interesting for the event audience.

As I mentioned on twitter:

Code looks a little hacky but got a translatable version of #Drupal Console commands, feature will be available on the next release. #drupal8

— Jesus Manuel Olivas (@jmolivas) January 2, 2015

But we needed a starting point.

Talking about code, this is what was required:

Adding the Symfony Translation Component to the composer.json file

"require": {
+   "symfony/config": "2.6.*",
+   "symfony/translation": "2.6.*",

For more information about the Translation Component, check out the awesome Symfony documentation here.

Add translation files and messages

# extract of config/translations/console.en.yml
      description: Rebuild and clear all site caches.
        cache: Only clean a specific cache.
        welcome: Welcome to the cache:rebuild command.
        rebuild: Rebuilding cache(s), wait a moment please.
        completed: Done cleaning cache(s).
        invalid_cache: Cache "%s" is invalid.
        cache: Select cache.

Currently four language files are available (en, es, fr and pt); you can find those files here. Note that these files are only copies of the console.en.yml file with a few overrides for testing purposes.

Create a new Helper class

In order to take care of the translation, the TranslatorHelper class was added (see code here). The helper was also registered in bin/console.php (see code here).

Inject the TranslatorHelper

For this task it was necessary to modify the RegisterCommandsHelper class, obtaining the TranslatorHelper and injecting it via the constructor when creating and registering a new instance of each command.

if ($cmd->getConstructor()->getNumberOfRequiredParameters() > 0) {
  $translator = $this->getHelperSet()->get('translator');
  $command = $cmd->newInstance($translator);
}
else {
  $command = $cmd->newInstance();
}

You can see the full class here

How can you help

Feel free to take a look at the messages at the github repo and send us fixes.

How to override the default language

It's as simple as creating a new YAML file at ~/.console/config.yml in your home directory and overriding the language value.

#file path ~/.console/config.yml
  language: es

How to make a console.phar

We are using, and recommend, the great Box project:

$ curl -LSs | php
$ mv box.phar /usr/local/bin/box

// Run this inside your project directory to create a new console.phar file
$ box build

Feel free to try this new multilingual feature in the latest release, v0.6.0. As usual, feel free to ask any questions by commenting on this page, adding a new issue on the Drupal project page, or on the GitHub repository.

I mentioned earlier that we are moving toward a code freeze so we can focus on documentation. The freeze is expected to be in place for about four weeks. I'll also use the time to prepare my DrupalCon Latino Console presentation with David Flores, aka @dmouse.

Stay tuned!

This post has been adapted from my personal blog.

Jan 21 2015
Jan 21

This tutorial will showcase how we have made Bootstrap 3, and especially its responsive grid system, an integral part of the platform, and will show you how to use some easy tools to make any website component or content mobile friendly!

About Bootstrap 3 in CMS Powerstart

The Drupal CMS Powerstart distribution has made Bootstrap 3 an integral part of the platform. The main reason we did this is to leverage the Bootstrap 3 responsive grid system. This grid system is not just functional, practical and effective; it's also widely used, widely understood and very well documented. On top of that, Bootstrap 3 is an active open source project, like Drupal, and is also supported very well in Drupal through a base theme and various modules. This tutorial will teach you about these integrations and how to use them to create awesome responsive websites with ease. It will focus more on the Drupal integration than on the grid system itself. For a quick introduction to the grid system, check out this tutorial. For real-life examples, check out our Drupal themes.

2.1 Bootstrap on blocks

Forget about themes with 16 regions, or 25 regions. If you're using Bootstrap, you really only need full-width regions that stack on top of one another. The horizontal division will be handled by block classes, with responsive layout switching that is customized for your content, not for your theme (or theme designer) or for an outdated wireframe.

In Drupal CMS Powerstart I added the block_class module, along with a patch that assists in our responsive design labours by auto-completing the Bootstrap 3 grid system classes.

2.2 Bootstrap in Views

To use Bootstrap 3 in views we will use the views_bootstrap Drupal module. Let's take a look at how this module is used to create a portfolio grid page for the Drupal CMS Powerstart Portfolio component.

Live demo of portfolio grid.

The views_bootstrap module provides an array of new Views display plugins:

  • Bootstrap Accordion
  • Bootstrap Carousel
  • Bootstrap Grid
  • Bootstrap List Group
  • Bootstrap Media Object
  • Bootstrap Tab
  • Bootstrap Table
  • Bootstrap Thumbnails 

This grid of portfolio thumbnails uses the Bootstrap Grid views display plugin, which allows you to output any content in a grid using Bootstrap's grid HTML markup. A current shortcoming of the module is that it only allows you to select the number of columns for the 'large' media query. Fortunately, there is a patch for that:

The Drupal CMS Powerstart distribution includes this patch and uses it in views to create truly responsive grids, where you can set the number of columns per media query. It works quite well out of the box. Here is the views format configuration used for the portfolio:

As you can see, it's really easy to create responsive views with this Views Bootstrap 3 integration! Without writing any code you can leverage the tried and tested responsive systems that Bootstrap provides. The views_bootstrap module gives you a whole set of tools that help you build responsive layouts and widgets using your trusted Views backend interface. This means site builders can rely less on themers and programmers and get work done more quickly.

Using custom markup in views

The Views Bootstrap module is great at organizing rows of data into responsive layouts, but it doesn't have the same level of support for the fields inside a row. Here is what we did to create a responsive events listing for the Drupal CMS Powerstart events component:

Live demo of events view.

The events view uses the 'Unformatted list' plugin that is provided by the Views module itself. This prints each row of data in a div container. There are two ways to make the contents of these rows responsive. One would be to generate node teasers inside the rows and configure the content type's teaser display mode to use grid classes on the fields; this method will be covered in the next part of this tutorial. For the events view we don't use teasers: we are building a fields view because it gives us more flexibility in the fields we show. Luckily the Views interface makes it easy to add grid classes right where we need them. First, we add a row class to each views row by clicking Settings under Format and entering row in the Row class field:

Now we can add responsive column classes to our fields and they will be organized within each row. We simply add classes by clicking each field and editing the Style Settings CSS class field:

The only thing we need to do here is check the Create a CSS class checkbox, and a textbox will appear that allows us to add grid classes to the field. This field uses the class col-sm-6, which makes our event title use 50% of its parent container's width (because Bootstrap uses a 12 column grid) on small devices and up. On an extra small device no grid class is active and the title uses 100% of its parent container's width, as you can see in the mock-up above. This method is not as easy as the point-and-click method discussed earlier, but if you are already familiar with the Views interface it will become intuitive with a little practice, and it gives you very fine-grained control over responsive behaviors in your views.

2.3 Bootstrap in Fields

Often you want to organise content fields in a layout. A module that can help here is Display Suite, but even with the ds_bootstrap_layouts extension it gives you only a limited set of layouts. We can easily build any layout by simply adding Bootstrap grid classes to fields. This is not to say I don't like Display Suite, but since CMS Powerstart focuses on simplicity I will choose the simplest solution. 'Big' tools like Panels and Display Suite are definitely more appropriate for larger Drupal projects.

As an example I will start building a new Drupal CMS Powerstart component. There was a feature request for a 'shop' component, so we will build a content type as part of a simple component that helps brick-and-mortar shops display their inventory. First we create a new content type called Object. Since Bootstrap columns need to be wrapped in row classes, we add the field_group module. Once you have downloaded and enabled field_group, you will have a new option 'Add new group' under the Manage Fields tab of your Object content type. We add a group called Bootstrap row using the default fieldset widget. Now drag the image and body fields to the indented position under the Bootstrap row field group. This creates a visual indication in the node/add and node/edit interface that the fields belong to the same group. Your Manage Fields interface should now look like this:

Next we go to the Manage Display tab of the Object content type. This is where the Bootstrap magic happens. Our goal is to display the body text and image field beside each other on wide devices and stacked on small devices. First, we have to create our Bootstrap row group again; this time we add a group named Bootstrap row and give it the 'Div' format. Give the field group the following configuration settings:

  • Fieldgroup settings: open
  • Show label: no
  • Speed: none
  • Effect: none
  • Extra CSS classes: row (you can remove the default classes)

Next we will drag the Body and Image fields to the indented position under the field group. Now we simply configure the field formatters to use the Bootstrap grid classes of our choice. To add these classes in the Manage Display interface we install another module: field_formatter_class. Once you have downloaded and enabled this module, go back to the Manage Display interface and you will see an option to add a class to each field. Set both the Body and Image fields to the Field Formatter Class col-sm-6. This creates a 2 column layout on devices wider than 768px and a stacked layout on smaller devices. If you are using Drupal CMS Powerstart, you can set the Image style of your image field to Bootstrap 3 col6, which resizes the image to exactly fit a 6 column grid container.

Your Manage Display tab should now look like this: 

Now if you create a node using your new content type it should look similar to this:

Using our new fieldgroup tool we can easily add Bootstrap rows and columns to any content type, and since classes are listed and edited in the Manage Fields interface, it's relatively quick and easy to manage per-node layouts. At least it's a step up from managing a ton of node templates.

2.4 Bootstrap in Content: Shortcodes

Sometimes you (or a client) just want to create a special page that needs more attention than other pages of the same type. Unfortunately there aren't any free tools that give our clients a true WYSIWYG experience for creating responsive Bootstrap grids. If you know one, please let me know! Our fallback option is the bs_shortcodes module that I ported from a WordPress plugin. This module lets you add nearly all Bootstrap components, including grid elements, using a WYSIWYG-integrated form.

To see the power and flexibility of these shortcodes, check out this demo page:

This system leverages the Drupal Shortcode API, which is a port of the WordPress shortcode API. The Drupal CMS Powerstart distribution ships with a WYSIWYG component that includes CKEditor 4 with the necessary Shortcode API and shortcode-provisioning submodules. Since the configuration of this setup is complex and beyond the scope of this article, I'm just going to assume you are using Drupal CMS Powerstart and are ready to use the WYSIWYG with Shortcodes integration.

To create a simple 2 column layout like in the previous examples we first add a row shortcode:

Then we select the column shortcode and find the code that corresponds to 6 columns on small devices:

Now if we use two 6-column shortcodes and put in the same content used in the Field and Field Group tutorial, it will look like this in the editor:
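The exact tag and attribute names depend on the bs_shortcodes implementation (they mirror the WordPress plugin it was ported from), so treat the following as an illustrative sketch rather than copy-paste syntax:

```
[row]
  [column sm="6"]
    ...body text goes here...
  [/column]
  [column sm="6"]
    ...image goes here...
  [/column]
[/row]
```

On output, the Shortcode API expands each tag into the corresponding Bootstrap div markup, so the two columns sit side by side from the small breakpoint up and stack below it.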

After saving the page it will look exactly like the Test Object page we created in the previous tutorial. I admit that shortcodes are a rather crude tool for a complex problem, but anyone who is willing to learn the basic principles of a 12 column grid system gains a huge amount of flexibility and capability in creating responsive content. When you combine the Bootstrap 3 grid documentation, the WYSIWYG integration, and, for emergencies, the documentation of the WordPress plugin, you already have a fully documented tool for savvy clients who don't want to deal with raw HTML code. Shortcodes don't seem like the most user-friendly tool, but I've seen clients pick them up quickly and appreciate the flexibility they give them in organising their most important pages. In the future we might see improvement in this area from tools like Visual Composer and the Drupal-compatible alternative Azexo Composer.

In Part 3 of this tutorial series I will write about using shortcodes as a site building tool and demonstrate what you can do with shortcodes in a real life Drupal CMS project. To get a sneak preview of the shortcode elements I will be using, check out our Drupal themes.

Jan 21 2015
Jan 21

It isn't just about Drupal here at ActiveLAMP -- when the right project comes along that diverges from the usual demands of content management, we get to use other cool technologies to satisfy more exotic requirements. Last year we had a project that presented us with the opportunity to broaden our arsenal beyond the Drupal toolbox. In short, we had to build a website that handles a growing amount of vetted content coming in from the site's community and two external sources. The whole catalog is available through a rich search tool and also through a RESTful web service which our client's partners can use to search for content to display on their respective websites.

Drupal 7 -- more than just a CMS

We love Drupal and we recognize its power in managing content of varying types and complexity. We at ActiveLAMP have solved a lot of problems with it in the past, and have seen how potent it can be. We were able to map out many of the project's requirements to Drupal functionality and we grew confident that it is the right tool for the project.

We implemented the majority of the site's content-management, user-management, and access-control functionality with Drupal, from content creation, revision, and display to printing. We relied heavily on built-in functionality to tie things together. Did I mention that the site's content and theme components are bilingual? Yeah, the wide array of i18n modules took care of that.

One huge reason we love Drupal is its thriving community, which drives to make it better and more powerful every day. We leveraged open-sourced modules that the community has produced over the years to satisfy project requirements that Drupal does not provide out-of-the-box.

For starters, we based our project on the Panopoly distribution of Drupal which bundles a wide selection of modules that gave us great flexibility in structuring our pages and saving us precious time in site-building and theming. We leveraged a lot of modules to solve more specialized problems. For example, we used the Workbench suite of modules to take care of the implementation of the review-publish-reject workflow that was essential to maintain the site's integrity and quality. We also used the ZURB Foundation starter theme as the foundation for our site pages.

What vanilla Drupal and the community modules could not provide, we wrote ourselves, thanks to Drupal's uber-powerful "plug-and-play" architecture, which easily allowed us to write custom modules to tell Drupal exactly what we need it to do. The amount of work that can be accomplished by the architecture's hook system is phenomenal, and it elevates Drupal from being just a content management system to a content management framework. Whatever your problem, there most probably is a Drupal module for it.

Flexible indexing and searching with Elasticsearch

A large aspect of our project is that the content we handle should be searchable through a tool on the site. The search criteria demand not only support for full-text searches, but also filtering by date range, categorizations ("taxonomies" in Drupal), and most importantly, geo-location queries and sorting by distance (e.g., within n miles from a given location). It was readily apparent that SQL LIKE expressions or full-text search queries with MySQL's MyISAM engine just wouldn't cut it.

We needed a full-fledged full-text search engine that also supports geo-spatial operations. And surprise! -- there is a Drupal module for that (a confession: not really a surprise). The Apache Solr Search modules readily provide the ability to index all our content straight from Drupal into Apache Solr, an open-source search platform built on top of the famous Apache Lucene engine.

Despite the comfort that the module provided, I evaluated other options which eventually led us to Elasticsearch, which we ended up using over Solr.

Elasticsearch advertises itself as:

“a powerful open source search and analytics engine that makes data easy to explore”

...and we really found this to be true. Since it is basically a wrapper around Lucene that exposes its features through a RESTful API, it is readily available to any app no matter which language the app is written in. Given the wide proliferation of REST APIs in web development, it puts a familiar face on a not-so-common technology. As long as you speak HTTP, the lingua franca of the Web, you are in business.

Writing/indexing documents into Elasticsearch is straightforward: represent your content as a JSON object and POST it to the appropriate endpoint. If you wish to retrieve a document on its own, simply issue a GET request with the unique ID that Elasticsearch assigned it during indexing. Updating it is also just a PUT request away. It's all RESTful and nice.
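As a sketch of that workflow from PHP (assuming an Elasticsearch node on localhost:9200; the index, type, and field names here are illustrative, not our production schema), indexing needs nothing more than curl:

```php
<?php
// Hedged sketch: index one document into Elasticsearch via its REST API.
// The index "volsearch", the type "toolkit_opportunity", and the field
// names are illustrative placeholders.
$document = array(
  'title'    => 'Community Food Drive',
  'partner'  => 'Mentor Up',
  'location' => array('lat' => 34.05, 'lon' => -118.24),
);

$ch = curl_init('http://localhost:9200/volsearch/toolkit_opportunity/');
curl_setopt_array($ch, array(
  CURLOPT_POST           => TRUE,
  CURLOPT_POSTFIELDS     => json_encode($document),
  CURLOPT_RETURNTRANSFER => TRUE,
));
$response = json_decode(curl_exec($ch), TRUE);
curl_close($ch);

// Elasticsearch replies with the generated ID; a later GET to
// /volsearch/toolkit_opportunity/{_id} retrieves the document,
// and a PUT to the same URL updates it.
```

The same four HTTP verbs cover the whole document lifecycle, which is what makes the API feel so familiar.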

Searches are made through API calls, too. Here is an example (in Elasticsearch 1.x syntax, with illustrative coordinates and distance) of a query which contains a Lucene-like text search (grouping conditions with parentheses and ANDs and ORs), a negation filter, a basic geo-location filter, and results sorted by distance from a given location:

POST /volsearch/toolkit_opportunity/_search HTTP/1.1
Host: localhost:9200
Content-Type: application/json

{
  "query": {
    "filtered": {
      "query": {
        "query_string": {
          "query": "hunger AND (financial OR finance)"
        }
      },
      "filter": {
        "bool": {
          "must_not": {
            "term": { "partner": "Mentor Up" }
          },
          "must": {
            "geo_distance": {
              "distance": "25mi",
              "location": { "lat": 34.05, "lon": -118.24 }
            }
          }
        }
      }
    }
  },
  "sort": [
    {
      "_geo_distance": {
        "location": { "lat": 34.05, "lon": -118.24 },
        "order": "asc",
        "unit": "mi"
      }
    }
  ]
}

Queries are written in Elasticsearch's own DSL (domain-specific language), in the form of JSON objects. The fact that queries are represented as a tree of search specifications in the form of dictionaries (or “associative arrays” in PHP parlance) makes them a lot easier to understand, traverse, and manipulate as needed, without the third-party query builders that Lucene's raw query syntax practically requires. It is this syntactic sugar that helped convince us to use Elasticsearch.

What makes Elasticsearch flexible is that it is, to a degree, schema-less. It really made it quite quick for us to get started and get things done. We just hand it documents with no pre-defined schema and it does its job of guessing the field types, inferring from the data we provided. We can specify new text fields and filter against them on the go. If you decide to use richer queries like geo-spatial and date-range searches, then you should explicitly declare fields as having richer types like dates, date ranges, and geo-points to tell Elasticsearch how to index the data accordingly.
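For instance, declaring those richer types up front might look like this (a sketch only; the field names are illustrative, and in Elasticsearch 1.x the JSON body would be PUT to the type's _mapping endpoint):

```php
<?php
// Hedged sketch: declare date and geo_point fields explicitly so
// Elasticsearch indexes them for range and geo queries instead of
// guessing. Field names are illustrative placeholders.
$mapping = array(
  'toolkit_opportunity' => array(
    'properties' => array(
      'title'    => array('type' => 'string'),
      'created'  => array('type' => 'date'),
      'location' => array('type' => 'geo_point'),
    ),
  ),
);

// PUT /volsearch/toolkit_opportunity/_mapping with this JSON body.
echo json_encode($mapping);
```

Fields not mentioned in the mapping continue to be dynamically typed, so you only pay the schema cost where the richer queries need it.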

To be clear, Apache Solr also exposes Lucene through a web service. However, we think Elasticsearch's API design is more modern and much easier to use. Elasticsearch also provides a suite of features that lend it to easier scaling. Visualizing the data is also really nifty with the use of Kibana.

The Search API

Because Elasticsearch has no built-in access control, we cannot just expose it to third parties who wish to consume our data. Anyone who can reach the Elasticsearch server would invariably have the ability to write and delete content from it. We needed a layer that firewalls our search index away from the public. Not only that, it also has to enforce our own simplified query DSL that the API consumers will use.

This is another area where we looked beyond Drupal. Building web services isn't exactly within Drupal's purview, although it can be accomplished with the help of third-party modules. However, our major concern was the operational cost of involving Drupal in the web service solution: we felt that the overhead of Drupal's bootstrap process is just too much for responding to API requests. It would be akin to swatting a fruit fly with a sledgehammer. We decided to implement all search functionality, and the search API itself, in a separate application written with Symfony.

More details on how we introduced Symfony into the equation and how we integrated the two will be the subject of my next blog post. For now we'd just like to say that we are happy with our decision to split the project's scope into smaller discrete sub-problems, because it allowed us to target each one with more focused solutions and expand our horizons.

Jan 21 2015
Jan 21

The default contact form in Drupal has quite basic settings. With the default admin UI you can only create categories and set the receiving email addresses. To change other preferences, such as the form title or the form destination, we have to implement override hooks.

In this article, we present some tricks to customize the contact form in Drupal. More tricks will be added regularly.

1. Edit the contact form title

To change the title, add this function to template.php in your theme folder (/sites/all/themes/your-theme/template.php):

function mytheme_form_contact_site_form_alter(&$form, &$form_state) {
  drupal_set_title('Contact ABC Media');
}

2. Redirect form result

By default, users are redirected to the front page after submitting the form. This behavior can confuse users, who may wonder whether their message was actually sent.

To redirect the contact form to a page of your choice, add these two functions to your theme's template.php file, as in section 1 above. I learned this from a tip by Postrational.

function my_theme_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'contact_site_form') {
    $form['#submit'][] = 'contact_form_submit_handler';
  }
}

function contact_form_submit_handler(&$form, &$form_state) {
  $form_state['redirect'] = 'thank-you-page-alias';
}

Do you have other tricks for contact forms in Drupal? Please share and we will post them here with acknowledgement to you.

Jan 21 2015
Jan 21

2015 is poised to be a great year for nonprofit technology and the adoption of digital tools to advance the causes we love. While I can’t say that I see too many groundbreaking innovations on the immediate horizon, I do believe that this will be a year of implementation and refinement. Building upon trends that we saw arise last year in the consumer industry and private sector, 2015 will be the year that many nonprofits leap into digital engagement strategies and begin to leverage new tools that will create fundamental change in the way that they interact with their constituencies.

Of course, as always happens when a growing sector first embraces new tools, the nonprofit technology world will see more than its fair share of awkward clunkiness this year, mainly as "software as a service" product companies rebrand their offerings for nonprofits and flash shiny objects at the earnest and hungry organizations we all support.

But as a more general and appealing trend, I believe that we’ll see a slimming down and a focus on polish this coming year. Visual storytelling and "long form" journalism are hopefully on the rise in the nonprofit digital world. We should see more, and better, integrations between web applications, data management systems, and social networks. These integrations will power more seamless and personalized user experiences. Rather than tossing up an incongruent collection of web interfaces and forms delivered by different paid service platforms, nonprofits will be able to present calls-to-action through more beautiful and less cumbersome digital experiences.

Below are some additional thoughts regarding the good stuff (and some of the bad) that we’re likely to see this year. If you have any additional predictions, please share your thoughts!

Visual Storytelling and the Resurgence of Long-Form Journalism

I don’t know about you, but I’m tired of my eyeballs popping out of my head every time I visit a nonprofit’s homepage that attempts to cram 1,000 headlines above the fold. I’m tired of the concept of "a fold" altogether. And don’t get me started about slideshow carousels as navigation: It’s 2015, folks!

Fortunately, we are seeing an elegant slowdown in the pace of writing for the web. Audiences are getting a little more patient, particularly when presented with clean design, pleasing typography, and bold imagery. We’re also seeing nonprofits embrace visual storytelling, investing in imagery and content over whistles and bells and widgets.

Medium and Exposure are my two favorite examples of impactful long-form journalism and visual storytelling on the web. These deceptively simple sites leverage cutting-edge javascript and other complex technologies to get out of the way and let content and visuals speak for themselves.


As an added benefit, adopting this long-form storytelling approach may help your SEO. Google took bold steps in late 2014 to reward websites that focus on good content. With the release of Panda 4.1, an update to its search algorithm, nonprofits who prioritize long-form writing and quality narrative will start to see significant benefits.

We’re already seeing nonprofits adopt this approach, including one of my new favorites, The Marshall Project. This site cuts away the usual frills and assumes an intelligent audience that will do the work to engage with the content. Don’t get me wrong: The Marshall Project website is slick and surprisingly complex from an engineering and user experience perspective – but its designers have worked hard to bring the content itself to the surface as the most compelling call-to-action.


2015 will be a big year for APIs in the CMS space. Teasing out those acronyms, we will see content management systems, like Drupal and WordPress, release powerful tools allowing them to talk with other web applications and tools. Indeed, a new web services layer is a central and much anticipated feature of the upcoming Drupal 8 release. WordPress made similar strides late last year with the early release of its own REST API.


Leveraging these APIs, 2015 will bring the nonprofit sector more mobile applications that share data and content with these organizations’ websites. The costs for developing these integrations should decrease relative to the usefulness of such solutions, which will hopefully lead to more experimentation and mobile investment among nonprofits. And as mentioned previously, because these new applications will have access to more constituent data across platforms, they will lend themselves to more robust and personalized digital experiences.


On the less technical and more DIY front, 2015 will be marked by the maturation of 3rd-party services that allow non-developers to integrate their online tools. In its awesome post about technology trends in 2015, the firm Frog Design refers to this development as the "emergence of the casual programmer." Services like Zapier, and my new favorite IFTTT, will allow nonprofits to make more out of social networks and services like Google Apps, turn disparate data into actionable analytics, see the bigger picture across networks, and make more data-driven decisions.

More Big (And Perhaps Clunky) Web Apps

If you’ve been following ThinkShout for a while now, you probably know that we are big fans of Salesforce because of its great API and commitment to open data. We maintain the Salesforce Integration Suite for Drupal. At this point, the majority of our client work involves some sort of integration between the Drupal CMS and the Salesforce CRM.

As proponents of data-driven constituent engagement, we couldn’t be more excited to see the nonprofit sector embrace Salesforce and recognize the importance of constituent relationship management (CRM) and CRM-CMS integration. Because of the power of the Salesforce Suite, we can build powerful, gorgeous tools in Drupal that sync data bidirectionally and in real time with Salesforce.


That said, part of the rise of Salesforce in the nonprofit sector over the last two years has been driven by the vacuum created by Blackbaud’s purchase of Convio. And now, with the recent releases of Salesforce’s NGO Connect and Blackbaud’s Raiser’s Edge NXT, both "all-in-one" fundraising solutions with limited website integration potential (in my opinion…), we’re going to see more and more of an arms race between these two companies as they try to “out featurize” each other in marketing to nonprofits. In other words, in spite of the benefits from integrating Drupal and Salesforce, we’re going to see big nonprofit CRM offerings like Salesforce and Blackbaud push competing solutions that try to do everything in their own proprietary and sometimes clunky ecosystems.

The Internet of Things

The Internet of Things (IoT), or the interconnectivity of embedded Internet devices, is not a new concept for 2015. We’ve seen the rise of random smart things, from TVs to refrigerators, over the last few years. While the world’s population is estimated to reach 7.7 billion in 2020, the number of Internet-connected devices is predicted to hit 26 billion that same year. Apple’s announcement of its forthcoming Watch last year heralded the first meaningful generation of wearable technology. Of course, that doesn’t necessarily mean that you’ll want to wear this stuff just yet, depending upon your fashion sense...


(Image from VentureBeat’s coverage of the 2015 Consumer Electronics Show last week. Would you wear these?)

However, the advent of the wearable Internet presents many opportunities to the nonprofit sector, both as a delivery device for micro-campaigns and targeted appeals, and as a tool for collecting information about an organization’s constituency. Our colleagues at BlueSpark Labs recently wrote about how these technologies will allow organizations to build websites that are really "context-rich systems." For example, with an Internet-connected watch synced up to a nonprofit’s website, that organization could potentially monitor a volunteer athlete’s speed and heart rate during a workout. These contextualized web experiences could drive deeper feelings of commitment among donors and other nonprofit supporters.


(Fast Company envisions how the NY Times might cover election results on the Apple Watch.)

Privacy and Security

While not exactly a trend in nonprofit technology, I will be interested to see how the growing focus on Internet privacy and security will affect online fundraising and digital engagement strategies this year.


(A poster for the film The Interview, which, as most of you probably know, incited a major hack of Sony Studios and spurred international dialogue about cyber security.)

We are seeing more and more startups providing direct-to-consumer privacy and security offerings. This last year, Apple released Apple Pay, which adds security, as well as convenience, to both online and in-person credit card purchases. And Silent Circle just released the Blackphone, an encrypted cell phone with a sophisticated and secure operating system built on top of the Android platform.


How might this focus on privacy and security affect the nonprofit sector? It’s hard to say for sure, but nonprofits should anticipate the need to pay for more routine security audits and best practices regarding maintenance of their web properties, especially as these tools begin to collect and leverage more constituent data. They should also consider how their online fundraising tools will begin to support new online payment formats, such as Apple Pay, as well as virtual currencies like Bitcoin.

And Away We Go…

At ThinkShout, we’ve already rolled up our sleeves and are excitedly working away to implement many of these new strategies and approaches for our clients in 2015. What are you looking forward to seeing in the world of nonprofit tech this year? What trends do you see on the horizon? Let us know. And consider swinging by the "Drupal Day for Nonprofits" event that we’re organizing on March 3rd in Austin, TX, as part of this year’s Nonprofit Technology Conference. We hope to dream with you there!

Jan 20 2015
Jan 20


2015-08-12 (All day) - 2015-08-15 (All day) America/Chicago


This is a placeholder to get MWDS on the calendar.
More details soon.

Will be hosted at
Wednesday through Saturday are all sprint days. No sessions.
Focus on getting Drupal 8 released (and some key contrib ports to Drupal 8).

Jan 20 2015
Jan 20

One of the things I've blogged about recently when talking about my upcoming book Model Your Data with Drupal is domain-driven design. While domain-driven design is important and something that I hope to touch on in the future, I've decided it's too much to cover in one book, and I'm refocusing Model Your Data with Drupal on basic object oriented principles.

If you want to know more about this decision and what will be covered in Model Your Data with Drupal, read on.

On studying complex subjects

There are a lot of ways to approach learning a complex, layered subject. You could just dive right in and see if there are any gaps in your foundational knowledge and work backwards to fill in those gaps. Others (like myself) like to try to figure out the best starting place before diving in and work incrementally, building a solid base of foundational knowledge that can be used as a platform for deeper learning. Of course there are advantages and disadvantages to both methods, but I often find that even though I'm inclined to work slowly and try to start at the beginning, I often get farther more quickly when I try to dive into advanced subjects and see what I'm lacking along the way.

Punching above your weight isn't a new idea, and if I had the time and patience to work through it, that's exactly how I would structure Model Your Data with Drupal. But it's not a perfect world, and I ran into some problems along the way. Basically I was trying to leave out too many layers in the software development layer cake. Most people really like cake, so that seemed like a pretty bad idea.

A tasty layer cake of software development

"What is this software development layer cake?" you may ask. It's simply the layers of foundational knowledge that build upon one another. Take this example from my recent presentation on Object Oriented (OO) Design Patterns for Drupal 8:

OO principles layer cake

In this example, more foundational material is at the bottom (even though basics such as understanding syntax, data structures, control structures, etc. have been left out):

OO Basics: The features that define what makes a system or program object oriented: Abstraction, Encapsulation, Polymorphism, Inheritance

OO Principles: The axioms or best practices; these are to OO programming what principles like Don't Repeat Yourself are to procedural or functional programming: encapsulate what varies, program to interfaces, favor composition over inheritance, strive for loosely coupled designs, depend on abstractions, etc.

OO Patterns: Finally, all of the above basics and principles give us a series of patterns that come naturally, such as decorator, factory, observer, strategy, facade, singleton, etc.
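As a tiny illustration of how the layers stack (the class names here are hypothetical, not from the book), "program to interfaces" and "favor composition over inheritance" lead naturally to the strategy pattern in PHP:

```php
<?php
// Hypothetical names, for illustration only. Document depends on the
// Formatter abstraction rather than a concrete class, and composes it
// instead of inheriting from it -- which is the strategy pattern.
interface Formatter {
  public function format($text);
}

class UppercaseFormatter implements Formatter {
  public function format($text) {
    return strtoupper($text);
  }
}

class Document {
  private $formatter;

  public function __construct(Formatter $formatter) {
    $this->formatter = $formatter;
  }

  public function render($text) {
    return $this->formatter->format($text);
  }
}

$doc = new Document(new UppercaseFormatter());
echo $doc->render('model your data'); // MODEL YOUR DATA
```

Swapping in a different Formatter implementation changes the behavior without touching Document at all, which is the loose coupling the principles layer is after.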

When you put these all together you get something like this:

OO principles layer cake with examples

Enter domain-driven design

This seems simple enough, even though it covers lots of ground. But you may be asking, where does domain-driven design come into this? Is it foundational, or another layer on the top of the cake?

Yes. It's a bit of both: domain-driven design is a process that informs patterns and principles, but it's also something that builds on all of the above, since it requires familiarity with the patterns and principles before you can speak fluently about a project. That doesn't seem like such a hard requirement until you consider that one of the main tenets of domain-driven design is a ubiquitous language for bringing developers, engineers, analysts, and domain experts together.

The current

The problem I kept running into while working on Model Your Data with Drupal was that it was hard to explain the high-level concepts of domain-driven design while also covering the low-level nitty gritty of writing PHP for Drupal. Putting myself in the reader's position, it seemed like there was plenty of material for the top and bottom levels of the cake, without much in between.

Because of this, I've decided to put the discussion of domain-driven design and Drupal on hold for now. Model Your Data with Drupal will instead focus on basic object-oriented principles and a few patterns, where applicable. If you're reading this book, you should be able to get something that you can easily apply to real-world projects without too much extra fluff, and refocusing will make the book clearer, easier to understand, and easier to put into practice.

The future

The other major factor for this change is that Drupal 8 is looming in the future, with massive changes coming for every Drupal developer. You may have heard about its sweeping changes, with tons of object-oriented systems and dependency injection everywhere. These changes are going to make it easier than ever to apply object-oriented principles to your Drupal projects and modules. Because of this, it's going to be easier to describe OO principles and domain-driven design in the context of Drupal 8.

Want to learn more?

If you're interested in learning more about the book, you can read more or sign up for the mailing list at Model Your Data with Drupal.

Jan 20 2015
Jan 20

Drupal 8 represents a radical shift, both technically and culturally, from previous versions. Perusing the Drupal 8 code base, you may find many parts unfamiliar. One bit in particular, though, is especially unusual: a new directory named /core/vendor. What is this mysterious place, and who is vending?

The "vendor" directory represents Drupal's largest cultural shift. It is where Drupal's 3rd party dependencies are stored. The structure of that directory is a product of Composer, the PHP-standard mechanism for declaring dependencies on other packages and downloading them as needed. We won't go into detail about how Composer works; for that, see my article in the September 2013 issue of Drupal Watchdog, Composer: Sharing Wider.

But what 3rd party code are we actually using, and why?

Crack open your IDE if you want, or just follow along at home, as we embark on a tour of Drupal 8's 3rd party dependencies. (We won't be going in alphabetical order.)

Guzzle

Perhaps the easiest to discuss is Guzzle. Guzzle is an HTTP client for PHP; that is, it allows you to make outbound HTTP requests with far more flexibility (and a far, far nicer API) than using curl or some other very low-level library.

Drupal had its own HTTP client for a long time... sort of. The drupal_http_request() function has been around longer than I have, and served as Drupal's sole outbound HTTP utility. Unfortunately, it was never very good. In fact, it sucked. HTTP is not a simple spec, especially HTTP 1.1, and supporting it properly is difficult. drupal_http_request() was always an after-thought, and lacked many features that some users needed.

What's more, it was a single function – one single 304 line function with a cyclomatic complexity of 41 and an N-Path complexity of over 25 billion. That's a fancy way of saying "completely and utterly impossible to unit test before the heat death of the universe." For a modern web platform, that's simply not good enough. (For more on cyclomatic complexity and N-Path complexity, see Anthony Ferrara's talk from DrupalCon Portland, “Development by the Numbers.”)

As we said, though, writing a good HTTP client is quite hard, and we already had plenty of hard tasks to do in Drupal 8. So instead, we outsourced it. After conducting a comparison survey of over a half-dozen different HTTP clients for PHP, we settled on Guzzle as the most feature-rich. The developer's decision to refactor Guzzle itself – to make it easier for Drupal to use just the portions we wanted – helped, too.

Guzzle actually has a lot of other capabilities that we're not using in core, but can be downloaded quite easily. One of the most interesting is the ability to auto-map RESTful services to PHP classes. (See the Guzzle documentation for more information.)
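As a rough sketch of what that nicer API looks like (assuming a recent Guzzle installed via Composer; the URL is just an example), compare this to hand-rolling the same request with curl:

```php
<?php

// Fetch a page and inspect the response with Guzzle.
$client = new \GuzzleHttp\Client();
$response = $client->get('https://www.drupal.org');

print $response->getStatusCode(); // The HTTP status code, e.g. 200.
$html = (string) $response->getBody();
```

In Drupal 8 itself you would typically get the client from the container rather than constructing it directly.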

That same thought process applies to much of Drupal 8: “This is going to be really hard to do, but someone already did it. Let’s just save time and use theirs. Open Source is cool like that.”

Doctrine Annotations

Doctrine is a large project. It's best known for its database abstraction layer (DBAL) – for which Drupal already has "DBTNG" – and for the Doctrine ORM object-relational mapper – for which the Entity and Field system already serves Drupal well, even if it doesn't have as cool-sounding a name. So what's Doctrine doing in Drupal?

The new plugin system in Drupal 8 makes use of "annotations". Annotations are a way to define special Docblock tags that can be parsed at runtime to provide metadata for a class or method. Essentially, they serve a similar purpose to "info hooks" in previous versions of Drupal, but keep that metadata right next to the class they describe. Of course, parsing those annotations into useful information takes work. (Some languages have native annotation support, but PHP does not; we have to rely on the docblock.)

As with Guzzle, why do that work when it's already been done? Doctrine's flavor of annotations is one of the more commonly used in PHP, so we adopted that and its annotations library. We even managed to submit some work back upstream to improve the efficiency of its parsing engine.
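As an example of what these look like in practice, a Drupal 8 block plugin declares its metadata in a docblock annotation that Doctrine's parser reads at discovery time (the class and IDs here are hypothetical):

```php
/**
 * Provides an example block.
 *
 * @Block(
 *   id = "example_block",
 *   admin_label = @Translation("Example block")
 * )
 */
class ExampleBlock extends BlockBase {
  // ...
}
```

The metadata that would have lived in an info hook in Drupal 7 now sits right next to the class it describes.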

Easy RDF

The story with Easy RDF is much the same. Managing RDF graphs is complicated: It's better for there to be a few really good libraries for it than lots of mediocre ones. So we just adopted an existing one that worked. (Notice a pattern emerging?)

Zend Feed

“Zend? I thought Drupal was using Symfony!”
Open source isn't partisan. As with many other libraries, Drupal had an old and half-implemented RSS parser in the Aggregator module that left much to be desired. (Actually we had two; Views has an RSS generating routine.)

The most robust RSS and Atom parser right now in PHP is the Feed library out of Zend Framework 2. After some discussion with the Zend Framework maintainers, they were able to remove a few dependencies from it, which made it small enough for Drupal to leverage. Out with the Aggregator RSS parser, in with Zend Feed. As a bonus, although core isn't using it, we now have a full-featured Atom parser ready to go for any module that wants to use it. As of this writing Views isn't leveraging it yet, but there's an open issue to do so. (Volunteers welcome.)

Twig

The Twig template engine is worthy of its own article (see Morten DK’s article in this issue). Its genesis was – you guessed it – much the same.

By the time of DrupalCon Denver, in early 2012, most core developers had concluded that PHPTemplate was no longer viable and needed to be put out to pasture. We needed something to replace it and, as we were already leveraging Symfony by that point, “let's look elsewhere” was a viable strategy. What really sold core developers on Twig was simple: front-end developers demanded it. Twig offers them a far nicer experience than PHPTemplate ever did, so core developers went “Okay, you want it, you got it!” That said, moving Drupal to Twig has taken a sizeable army of front-end developers, many working in core for the first time.
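To see what front-enders were asking for, a hypothetical node teaser template in Twig reads like plain markup with placeholders (the variable names here are illustrative):

```twig
{# A hypothetical node teaser template. #}
<article class="node-teaser">
  <h2>{{ label }}</h2>
  {{ content }}
</article>
```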

Assetic

Assetic is an asset management library; that is, it helps manage CSS and JavaScript files. As is the trend, it is replacing much of Drupal's home-grown CSS/JS compression and aggregation logic. Work is still happening in this area, and it's highly unlikely that module developers will ever deal with it directly, but it's there.

PHPUnit

PHPUnit is the industry standard testing framework for PHP. Although it has not fully replaced Drupal's home-grown testing framework yet, it is slowly supplanting it. For most code written for Drupal 8, if it cannot be tested with PHPUnit, then your code is flawed. (See Sam Boyer’s article in this issue, PHPUnit and Drupal.)
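For anyone who has not seen one, a PHPUnit test of the era is just a class of assertions (the Calculator class here is invented for illustration):

```php
class CalculatorTest extends \PHPUnit_Framework_TestCase {

  public function testAddReturnsSum() {
    $calculator = new Calculator();
    $this->assertSame(4, $calculator->add(2, 2));
  }

}
```

Each test method runs in isolation, which is exactly what the old 25-billion-path drupal_http_request() could never offer.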

PSR Logger

The tiny Psr\Log library is just a collection of interfaces released by the Framework Interoperability Group (PHP-FIG) to standardize logging. As of this writing, it is only included because it's a dependency for some Symfony components, but there is active work to replace the watchdog() function with a new logger that uses the same standard interface. (With due apologies to the editors.)

Gliph

Gliph is an interesting case. It was written by Drupal developer Sam Boyer to solve a Drupal problem, but there is nothing Drupal-specific about it. Sam simply decided to build it outside of Drupal as an MIT-licensed library that Drupal, or anyone else, could then import. That's an approach that I expect will become increasingly popular in coming years, both for core and contrib. Gliph is a graph management library, that is, mathematical graphs such as dependency trees. It will be used to complement Assetic, and again it's unlikely that module developers will ever use it directly. (But if you need dependency resolution logic, it's there – go for it!)

Symfony

Last but not least, there's Symfony. Drupal 8 is not using all of Symfony by any means; in fact, we're using less than a third of the component libraries it offers and none of the fullstack framework. Nonetheless, it shares the same core pipeline with many other projects in the Symfony family. Many of these can and do have their own articles, so for now we'll just give a cursory review of them.

HttpFoundation / HttpKernel

These are the libraries that started it all. HttpFoundation was the first significant 3rd party library added to Drupal 8, followed soon after by HttpKernel. HttpFoundation abstracts the HTTP Request and Response concepts, replacing PHP's native superglobals and "print, but hope you don't have cookies" mess. HttpKernel essentially provides an interface for mapping a request to a response; a simple concept, it's actually fundamental to what any web application does. It also includes many powerful standard implementations that Drupal is leveraging, including the default HttpKernel itself.

Once again, the need was to replace Drupal's page-only routing system with a pipeline that could handle the full power of HTTP, and do so with a more self-documenting API. After designing one in the abstract, the Web Services Initiative found that Symfony had already implemented essentially what we had concluded we needed.
Open Source – For The Win!

Routing / CMF Routing

Routing is the process of mapping an incoming request to the code that will handle that particular request. In Drupal 7, it was hook_menu, menu_get_item(), and page callbacks. In Drupal 8, it's Symfony's Routing component with enhancements from the Symfony CMF project.

The Symfony CMF Routing component was actually a close collaboration between Drupal and Symfony CMF; despite nominally being competitors, both projects saw the value in working together to build one really solid routing framework.
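The result is declarative route definitions in YAML. A hypothetical example.routing.yml entry (all names invented) takes over the job that hook_menu() and page callbacks did in Drupal 7:

```yaml
# Map a path to a controller, with access requirements.
example.page:
  path: '/example/{id}'
  defaults:
    _controller: '\Drupal\example\Controller\ExampleController::page'
  requirements:
    _permission: 'access content'
```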

EventDispatcher

The HttpKernel library makes use of the Symfony EventDispatcher library as well. Events are, essentially, object-oriented testable hooks. The low-level parts of Drupal 8 are using events, while many older systems are still using hooks. For Drupal 8 contributed modules, it's a good idea to focus on using events (as well as plugins) over hooks in most cases. There is serious talk of removing hooks as redundant in Drupal 9, so get a head start on that transition (and make your code more testable to boot).
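The shape of an event subscriber looks roughly like this (a sketch; the event name and class are invented):

```php
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class ExampleSubscriber implements EventSubscriberInterface {

  // Map event names to methods on this class.
  public static function getSubscribedEvents() {
    return array('example.event' => 'onExampleEvent');
  }

  public function onExampleEvent($event) {
    // React to the event here.
  }

}
```

Because the subscriber is an ordinary class behind an interface, it can be unit tested on its own, which is the main advantage over a hook.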

Dependency Injection

The last big library is the Dependency Injection component. It provides the Dependency Injection Container, or simply "container", that ties all of Drupal's loosely coupled libraries (both 3rd party and home grown) together into a cohesive system. Almost all developers will be interacting with it, but mostly through a services.yml file rather than dealing with its low-level APIs directly.
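In practice, module developers register their classes with the container through a services file; a hypothetical example.services.yml (class and service names invented) might look like:

```yaml
services:
  example.greeter:
    class: Drupal\example\Greeter
    arguments: ['@current_user']
```

The container then constructs Greeter with the current_user service injected, so the class never has to reach out for its own dependencies.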

Serializer

The Serializer component is a simple framework for managing the serialization and deserialization of objects to various string formats. Drupal 8 is using it as part of the REST framework, as it provides a common way to convert Entities to and from different formats like JSON-HAL (our default), XML, etc. If you want to support a new format for Entities (say, JSON-LD or Collection or some XML format), you'd write new services for the Serializer, and the rest would wire itself up automatically.

Validator

This little library provides a structured way to validate data against defined rules. It is used deep within the Plugin and Entity systems, and most module developers won't be interacting with it directly.

YAML

Finally there is YAML. YAML is a text file format that Drupal 8 is using for many configuration files. Symfony has a YAML parser, we needed one, you know the drill by now – Open Source FTW.

Reuse All the Things!

All of that is found in one simple directory. That's the advantage of decoupled libraries and easy sharing between projects: Do less work, reuse more code, get more done faster.

That's the power of /vendor.

Image: ©profotokris/123RF.COM

Jan 20 2015
Jan 20

One tool for stylesheets

Laziness tends to get in the way of progress, but it doesn’t have to! There is now a tool to help with all of those steps in the CSS process that we don’t want to take ourselves. This is especially important now that web development is mobile-first: we need to optimize our code to decrease page load time and keep our users happy. New tools are created every day, and staying up to date with all of them is hard. That is exactly why developers try to collect as many tools as possible in one place. Pleeease is a perfect example of this kind of tool; it is a web development Swiss Army knife.

What is it?

Pleeease is a CSS post-processor based on Node.js that bundles a lot of CSS tools in one. It can work magic to make your stylesheets production-ready, and it also cleans up the output of CSS pre-processors (Sass, Less, or Stylus). You can use it as a standalone tool from the Node prompt or as a Gulp.js plugin in your tasks. To use it, you must install Node.js on your system (and Gulp.js if you prefer to run Pleeease automatically as a project task). It has a simple JSON-like configuration file: if you need to change some defaults, just create a .pleeeaserc file in your project folder, or declare the settings directly in your gulpfile if you use one.
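If you go the Gulp route, a task using the gulp-pleeease plugin might look like this sketch (the task name and paths are invented):

```javascript
var gulp = require('gulp');
var pleeease = require('gulp-pleeease');

// Post-process every stylesheet in css/ and write the result to build/.
gulp.task('css', function () {
  return gulp.src('css/*.css')
    .pipe(pleeease())
    .pipe(gulp.dest('build'));
});
```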

What can it do?

The first thing you can do is set the source and destination in the config file, using the “in” and “out” parameters. Example:


{
  "in": "*.css",
  "out": "app.min.css"
}


So let’s explore all of the amazing features that Pleeease has in its arsenal.

Autoprefixer

It sets the correct browser prefixes for CSS3 properties according to the browser support that you need. Unlike tools such as Compass that set all of them, it adds only the necessary prefixes, using the CanIUse database. You can specify browser versions, min or max versions, browsers above some global usage percentage, and much more; all available settings can be found on the official GitHub page. By default, Pleeease targets all browsers with global usage over 1%, the last 2 versions of all browsers, Firefox ESR, and Opera 12.1. Example of post-processing with the defaults:

background: linear-gradient(red, blue);

in the output file:

background: -webkit-gradient(linear, left top, left bottom, from(red), to(blue));

background: -webkit-linear-gradient(red, blue);

background: linear-gradient(red, blue);

Filters

CSS filter effects are part of a CSS3 draft, and only WebKit-based browsers support them natively right now. Firefox can use an SVG fallback and old IE uses its own filter syntax, while modern IE (10-11), Opera Mini, and the native Android browser have no support at all. But you can use them now in parts of your project where they will not affect functionality, adding some wow-effect to your pages for users with “good” browsers.

Pleeease makes them simple to declare in your CSS: just use the standard syntax and the tool will add the prefixes, create the SVG fallback, and emit the old IE filter syntax if you set “filters”: {“oldIE”: true} (by default it is set to false).

Example with blur filter:


/* input */
filter: blur(3px);

/* output */
filter: url('data:image/svg+xml;utf8,">#filter');
-webkit-filter: blur(3px);
filter: blur(3px);
filter: progid:DXImageTransform.Microsoft.Blur(pixelradius=3);

rem units

CSS3 provides us with root em (rem) units, which are similar to standard em except that only the font-size of the root element affects them, instead of the parent's. Unfortunately, browser support is still lacking. Pleeease finds all rem declarations in our files and adds a pixel fallback to them, so users of old browsers are still happy.


/* input */
h1 {
  font-size: 2rem;
}

/* output */
h1 {
  font-size: 32px;
  font-size: 2rem;
}

Pseudo-elements

It converts the CSS3 double-colon syntax for pseudo-elements to the old single-colon syntax for backward compatibility with old browsers like IE8 (modern browsers support both syntaxes); for example, ::after will be converted to :after to avoid bugs.

Opacity effect

Old IEs have their own filter property for an opacity effect, and it does not have a friendly syntax, but Pleeease will help you with it: the tool automatically supplements all opacity properties with the filter equivalent. Let’s see how it works in practice:


/* input */
h2 {
  opacity: .25;
}

/* output */
h2 {
  opacity: .25;
  filter: alpha(opacity=25);
}

Media query packer

Pre-processor tools give us the opportunity to write media query breakpoints directly alongside other CSS properties. This makes our code more readable and maintainable, since we can see all of an element's changes in one place. But the CSS generated from code written this way is not as pretty as we’d like it to be. Our tool can help us here again: it analyzes the CSS, finds all @media declarations, matches them, and concatenates them into one, so our CSS looks great again and performance improves too. Here is a simple example of how it works:


/* input */
h1 {
  font-size: 2em;

  @media screen and (min-width: 768px) {
    font-size: 1.5em;
  }
}

h2 {
  font-size: 1.75em;

  @media screen and (min-width: 768px) {
    font-size: 1.25em;
  }
}

/* output */
h1 {
  font-size: 2em;
}

h2 {
  font-size: 1.75em;
}

@media screen and (min-width: 768px) {
  h1 {
    font-size: 1.5em;
  }

  h2 {
    font-size: 1.25em;
  }
}



Source maps, imports, minifier

And finally, Pleeease can inline all of your @import declarations, generate source maps (which let you edit styles directly in browser debugging tools), and minify the CSS to decrease the output file size. I hope it gains even more useful features in the future.

Why should I use it?

There are several reasons. First, it is a single tool for all stylesheet post-processing; it doesn't compile Sass, Less, Stylus, or whatever itself, but it works alongside those specialized pre-processing tools. Second, more features are on the way: the list of functions is not complete, and you can read on the Pleeease website about experimental features that will soon be added, including support for native CSS variables, color functions, and a lot of other new CSS features. This tool can also shrink your gulpfile if you build a project with Gulp, because it replaces a lot of separate Gulp plugins, letting you process your styles with only two actions: pre-processing the Sass, Less, or Stylus code, and post-processing the CSS. So enjoy your virtual all-in-one web development tool! I think it’s pretty cool! If you have any questions about how it all works, contact us, we'd like to help!

Jan 20 2015
Jan 20

Code structure is something most Drupal developers wrestle with. There are tons of modules out there that make our lives easier (Views, Display Suite, etc.) but managing database configuration while maintaining a good workflow is no easy challenge. Today I'm going to talk about a few approaches I use in my work here at Echo. We will be using a simple use case of creating a paginated list of blog posts. To start, we're going to talk about the workflow from a high level, then we'll get into the modules that leverage Drupal in a way that makes sense. Finally, we'll have some code samples to help guide things along.

The Workflow

This will vary a bit based on what you need, but the idea is that we never want to redo our work. Ideally, we'd like to design a View or piece of functionality once on our local environment, then package it and push it up. Features is a big driving force behind this. Beyond that, we want things like page structures and custom code to have a place to live that makes sense. So, for this example we will be considering the idea of a paginated list of blog posts. This is a heavy hammer to be swinging at such a solved task, but we will get into why this is good later on.

  • Create a new Feature that requires ctools and panels (and not views!)
  • Open up the generated .module file and declare the ctool plugin directory
  • Create the plugins/content_types/ file
  • Define the needed functions within to make it work
  • Add the newly created content type to a page in Page Manager
  • Add everything we care about to the Feature and export it for deployment

Installation

This only assumes that you have a working Drupal installation and some knowledge of how to install modules. In this case, we will be using drush to accomplish this, but feel free to pick your poison here. Simply run the following commands and answer yes when prompted.

drush dl ctools ds features panels strongarm
drush en ctools ds features panels strongarm page_manager

What we have done here is install and enable a strong foundation on which we can start to scaffold our site. Note that I won't be getting into folder structure too much, but there are some more steps before this you would have to take to ensure contrib, custom, and features all make it to their own place. We wave our hands at this for now.

Features

The first thing we're going to do is generate ourselves a Feature. Simply navigate to Structure -> Features -> Create Feature and you will see a screen that looks very similar to this. Fill out a name, and have it require ctools and panels for now.

Features screen

This will generate a mostly empty feature for us. The important part we want here is the ability to turn it on and off in the Features UI, and the structure (that we didn't have to create manually!) which includes a .module and .info file is ready to go for us. That being said, we're going to open it up and tell it where to find the plugins. The code to do that is below, and here is a screenshot of the directory structure and code to make sure you're on the right track. Go ahead and create the plugins directory and associated file as well.

function blog_posts_ctools_plugin_directory($owner, $plugin_type) {
  return 'plugins/' . $plugin_type;
}

Chaos Tools

Known more commonly as ctools, this is a module that allows us this plugin structure. For our purposes, we've already made the directory and file structure needed. Now all we have to do is create ourselves a plugin. There are three key parts to this: plugin definition, render function, and form function. These are all defined in the .inc file mentioned above. There are plenty of resources online that get into the details, but basically we're going to define everything that gets rendered in code and leverage things like Display Suite and the theme function for pagination. This is what we wind up with:

/**
 * Plugin definition.
 */
$plugin = array(
  'single' => TRUE,
  'title' => t('Blog Post Listing'),
  'description' => t('Custom blog listing.'),
  'category' => t('Custom Views'),
  'edit form' => 'blog_post_listing_edit_form',
  'render callback' => 'blog_post_listing_render',
  'all contexts' => TRUE,
);

/**
 * Render function for blog listing.
 *
 * @author Austin DeVinney
 */
function blog_post_listing_render($subtype, $conf, $args, &$context) {
  // Define the content, which is built throughout the function.
  $content = '';

  // Query for blog posts.
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node', '=')
    ->entityCondition('bundle', 'blog_post', '=')
    ->propertyCondition('status', NODE_PUBLISHED, '=')
    // Enable paging so theme('pager') below has pages to render.
    ->pager();

  // Fetch results, and load all nodes.
  $result = $query->execute();

  // If we have results, build the view.
  if (!empty($result)) {
    // Build the list of nodes.
    $nodes = node_load_multiple(array_keys($result['node']));
    foreach ($nodes as $node) {
      $view = node_view($node, 'teaser');
      $content .= drupal_render($view);
    }

    // Add the pager.
    $content .= theme('pager');
  }
  // Otherwise, show no results.
  else {
    $content = "No blog posts found.";
  }

  // Finally, we declare a block and assign it the content.
  $block = new stdClass();
  $block->title = 'Blog Posts';
  $block->content = $content;
  return $block;
}

/**
 * Function used for editing options on page. None needed.
 *
 * @author Austin DeVinney
 */
function blog_post_listing_edit_form($form, &$form_state) {
  return $form;
}

Some things to note here: we're basically making a view by hand using EntityFieldQuery. It's a nifty way to write entity queries more easily, and there are some useful how-tos available online. We also offload all rendering to work with Display Suite and use the built-in pagination that Drupal provides. All things considered, I'm really happy with how this comes together.

Panels

Finally, we need to add this to Page Manager with Panels. Browse to Structure -> Pages -> Add custom page and it will provide you with a step-by-step process to make a new page. All we're going to do here is add our newly created content type to the panel, as shown here.

Panel screen

And now, we're all ready to export to the Feature we created. Go on back to the Features page, recreate the feature, and you're ready to push your code live. After everything is said and done, you should have a working blog with pagination.

Blog screen.


Obviously, this example is extremely basic. We could have done this in a View in far less time. Why would we ever want to use this? That's a great question, and I'd like to elaborate on why this is important. Views are great and solve this problem just as well. They export nicely with Features and can even play with Panels (if you want to use Views as blocks or content panes). That being said, this example is really about laying out how custom code can live alongside Drupal's best practices. Imagine instead that we have a complicated third-party API we're trying to query and want our "view" to react to it. What if we want a small, code-driven block that we can place discretely with Panels? The use cases go on, of course.

There are many ways to solve problems in Drupal. This is just my take on a very clean and minimal code structure that allows developers to be developers and drive things with their code, rather than being stuck clicking around in menus.

Jan 20 2015
Jan 20

When building a Drupal 7 site, one oft-used technique is to keep the entire Drupal root under git (for Drupal 8 sites, I favor having the Drupal root one level up).

Starting a new project can be done by downloading an unversioned copy of D7, and initializing a git repo, like this:

Approach #1

drush dl
cd drupal*
git init
git add .
git commit -am 'initial project commit'
git remote add origin ssh://

Another trick I learned from my colleagues at the Linux Foundation is to get Drupal via git and have two origins, like this:

Approach #2

git clone --branch 7.x drupal
cd drupal
git remote rename origin drupal
git remote add origin ssh://

This second approach lets you push changes to your own repo, and pull changes from the Drupal git repo. This has the advantage of keeping track of Drupal project commits, and your own project commits, in a unified git history.

git push origin 7.x
git pull drupal 7.x

If you are tight for space though, there might be one inconvenience: Approach #2 keeps track of the entire Drupal 7.x commit history, for example we are now tracking in our own repo commit e829881 by natrak, on June 2, 2000:

git log |grep e829881 --after-context=4
commit e8298816587f79e090cb6e78ea17b00fae705deb
Author: natrak 
Date:   Fri Jun 2 18:43:11 2000 +0000

    CVS drives me nuts *G*

All of this information takes disk space: Approach #2 takes 156Mb, vs. 23Mb for approach #1. This may add up if you are working on several projects, and especially if for each project you have several environments for feature branches. If you have a continuous integration server tracking multiple projects and spawning new environments for each feature branch, several gigs of disk space can be used.

If you want to streamline the size of your git repos, you might want to try the --depth option of git clone, like this:

Approach #3

git clone --branch 7.x --depth 1 drupal
cd drupal
git remote rename origin drupal
git remote add origin ssh://

Adding the --depth parameter here reduces the initial size of your repo to 18Mb in my test, which interestingly is even less than approach #1.
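You can see the effect of --depth for yourself in a throwaway local sandbox (all paths here are temporary, and the toy repo stands in for drupal.org; file:// forces the git transport, so --depth is honored even for a local clone):

```shell
# Build a toy repo with two commits, then shallow-clone it.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/full"
cd "$tmp/full"
git config user.email you@example.com
git config user.name you
echo one > file.txt && git add file.txt && git commit -qm 'first'
echo two > file.txt && git commit -qam 'second'
cd "$tmp"
git clone -q --depth 1 "file://$tmp/full" shallow
echo "full:    $(git -C full rev-list --count HEAD) commits"
echo "shallow: $(git -C shallow rev-list --count HEAD) commits"
```

The shallow clone reports a single commit while the original keeps its full history, which is exactly where the disk savings come from.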

Jan 20 2015
Jan 20

If you are not already using Git on your Drupal websites or projects, now is the time to learn. Over the next week or two, I will be going over a brief introduction to Git in 5 parts. In the following post, I will provide a quick overview of Git and Git hosting services. In subsequent parts, I will walk through examples of Git commands and what they do. In the 5th and final part I will bring it all together with examples of how Git is commonly used with Drupal.

Git is one of the secrets from my 5 secrets to becoming a Drupal 7 Ninja ebook, and much of the following posts are from this ebook. To learn more about Git and the other secrets, please consider purchasing the ebook or signing up for the 5 secrets email newsletter which will give you more information about the 5 secrets.

What is Git?

You have probably heard of Version Control or Git before. If you are not already using a Version Control system now is the time to start. According to the Git website:

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

So what exactly is a version control system? To keep things simple, it is basically a way to track changes that you have made to files over a period of time. It gives you the ability not only to track those changes, but to roll back to a previous point in time if something goes bad. A version control system also makes it much easier for multiple developers to work on a single project without stepping on each other’s toes.

Git is a distributed version control system, which means that every developer working on a project has a full copy of the repository. A repository is just another name for how the project and its files are stored in the version control system. Generally, when working with Git, you will have some type of server that you push your changes to. Often this will be a third-party service like GitHub, Bitbucket, or one of the many other alternatives.

Choosing the Right Service to Host Your Git Repository

There are a lot of options to consider when choosing where (and if) you want to use a third party service to host the Git repository for your project. These services provide a lot of useful tools that make working with your Git repository easier. Some standard tools to keep an eye out for include:

  • Ability to view the code of your Git repository
  • Issue or Bug tracking
  • Create and manage Git branches of code
  • Built in Code Review Tools
  • Collaboration tools to make building a software project with a team easier

There are typically many more features, but that is a basic list that almost all Git hosting services offer. It is best to do your own research here, as opinions tend to vary on which is best. The most popular is probably GitHub. It provides a great interface and great collaboration tools, and is especially popular for open source software. GitHub is free to use as long as you make your repository public; it charges for private repositories, basing its fees on the number of private repositories you require.

Bitbucket is another popular choice. It offers free Git project hosting for teams of five or fewer and allows an unlimited number of public or private Git repositories. All of Bitbucket’s fees are based on the number of people on the team, not the number of repositories. This distinct difference between Bitbucket and GitHub often makes the decision for you, based on the type of project you are building and your team size (assuming you are basing your decision only on price). There are many other options out there, but these are the most widely used that I am aware of.

So which Git project hosting service do I use? Well... both, actually. I prefer GitHub for any type of open source project; its interface and collaboration tools are slightly better than Bitbucket’s, in my opinion. I do, however, use Bitbucket much more than GitHub. Because I often work on projects in small teams, and I need private repositories for much of my work, Bitbucket is the logical choice. I also don’t want to discount the tools in Bitbucket, as they too are really good (GitHub is just slightly more user friendly).

Ninja Lesson: All Git hosting services follow the same constructs. Learn Git and you can easily adapt to the hosting service of your choosing.

Getting Started with Git

So how do you go about getting started with Git if you have never worked with a version control system before? The first step is to download Git for your operating system. Once you have Git downloaded and installed, you may be tempted to download a Git GUI client. You can browse for one of your choosing and try it out (I have used GitEye with some success in the past, as it provides a Linux version). I won’t be covering Git GUI clients because frankly I don’t like using them, and I think they shroud what is actually happening (sometimes making it seem more confusing than it has to be). Even if you do want to use a Git GUI client, I highly suggest learning the basics from the command line first. This will give you a much deeper understanding of what the various commands are doing and how the entire Git process works.
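To make that concrete, here is roughly what a very first command-line session looks like once Git is installed and on your PATH. The project name, file name, and identity values are just examples (in real use you would set your identity once with `git config --global`):

```shell
# Create a project folder and turn it into a Git repository.
mkdir myproject && cd myproject
git init

# Tell Git who you are so commits can be attributed (example identity).
git config user.name "Example Ninja"
git config user.email "ninja@example.com"

# Create a file, stage it, and record it in history.
echo "My first project" > README.txt
git add README.txt
git commit -m "Add README"

# View the history you just created.
git log --oneline
```

Part 2 covers these configuration and basic commands in more depth.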

Intro to Git Part 1 Summary

In the subsequent 4 parts, you will be able to follow along to create your first Git repository, learn the basics of Git commands, create a larger Git repository for your Drupal website, and learn how to pull down external Git repositories (like those on Github or Bitbucket).

So start out by following the installation instructions for your operating system and getting Git installed. In Intro to Git Part 2, we will get started with some basic Git commands and configuration.

Jan 20 2015
Jan 20

The next beta for Drupal 8 will be beta 5! Here is the schedule for the beta release.

Tuesday, January 27, 2015: only critical and major patches committed.
Wednesday, January 28, 2015: Drupal 8.0.0-beta5 released; emergency commits only.
Jan 19 2015
Jan 19

Recently, we were debugging some performance issues with a client's Drupal Commerce website. After doing the standard optimizations, we hooked up New Relic so we could see exactly what else could be trimmed.

The site is using different line item types to differentiate between products that should be taxed in different ways. Each line item type has a field where administrators can select the tax code to use for that line item type. The options for the select list are populated via an API call to another service provider. The call for the list was using the static cache because it was thought that the list would only be populated when needed on the line item type configuration page. In reality, that's not the case.

When an Add to Cart form is displayed in Drupal Commerce, it also loads the line item type and the line item type's fields. When loading the fields, it loads all of the options even if the "Include this field on Add to Cart forms for line items of this type" option is not enabled for that field. In this case, it resulted in 90 HTTP calls to populate the list of tax codes every time someone viewed a page with an Add to Cart form.

The solution was to actually cache those results using Drupal's Cache API. You can see the improvement:
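For reference, the fix followed the standard Drupal 7 caching pattern built on cache_get() and cache_set(). The sketch below is illustrative rather than the client's actual code: the function names, cache ID, and six-hour lifetime are all hypothetical.

```php
/**
 * Returns the list of tax codes, caching the remote API result.
 *
 * mymodule_fetch_tax_codes() stands in for the expensive call that
 * previously triggered ~90 HTTP requests per Add to Cart form.
 */
function mymodule_get_tax_codes() {
  // Static cache avoids repeat lookups within a single request.
  $codes = &drupal_static(__FUNCTION__);
  if (!isset($codes)) {
    if ($cache = cache_get('mymodule:tax_codes')) {
      // Persistent cache hit: no HTTP calls at all.
      $codes = $cache->data;
    }
    else {
      // Cache miss: do the expensive remote fetch once...
      $codes = mymodule_fetch_tax_codes();
      // ...and keep the result for six hours (hypothetical lifetime).
      cache_set('mymodule:tax_codes', $codes, 'cache', REQUEST_TIME + 21600);
    }
  }
  return $codes;
}
```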

Jan 19 2015
Jan 19


Almost all Drupal websites have multiple Views displays containing various content, but your options for sorting that content are usually limited. Most Views displays can only practically be sorted by creation date or node title. This works well in many cases, but if you need user-friendly, manually controlled sorting, you will need to extend Views.

There’s a page comparing the various node-ordering modules, but our favorite is DraggableViews, and in this article we’ll show you how to use it to create a drag-and-drop sortable image gallery.


  • Install the latest 7.x-2.x branch of DraggableViews; for Drush users the project name is 'draggableviews'
  • Module dependencies: Views, Chaos tools, Entity API


DraggableViews allows you to make the rows of a Drupal View "draggable," meaning they can be rearranged using a drag-and-drop interface. For this example we've created a content type called Images that contains an image field, which will be used for a photo gallery page on our website.

We then created multiple Image nodes containing stock photography. Our View is currently limited to a few sorting options like creation date and title, but we want our editors to be able to easily reorder the images, so now we’ll set up DraggableViews and create a sorting interface.

  1. Edit your existing View that you want to be sortable (in this example it’s our image page View).
  2. Add a new display to your existing View. This new display should normally be a Page display type and this is what will be used as the sorting interface.
    Drupal View for Drag Drop sorting
  3. Setup your new View's Display similar to the following (This will vary depending on your specific needs).
    • IMPORTANT: Be sure to override this sorting display for all applicable settings so you’re not also changing the main View display when we edit the sorting display’s options in the following steps.
    • Set Display and Title to reflect your sorting display. Example, “Sort Images”.
    • IMPORTANT: The display format must be set to Table.
    • Add only the minimal amount of fields needed to be visible for your node’s sorting purposes. For example, only display the node titles or, as in this example, small thumbnails of the node’s image.
      • Add a title, image thumbnail, or some other visual reference field.
      • Add a NID (Node ID) field and be sure to select ‘Exclude from Display’.
      • Add the “DraggableViews” field, leaving the default settings.
    • Remove any sorting criteria that may already be in your View and add the “DraggableViews Weight” field as the sort criteria for both this sorting View and the main display View. The parent View’s DraggableViews sort field needs to be set to use the new sorting View in its “Display Sort as” setting.
    • Give your sorting View a page path. I like to use something like "admin/content/sort/photos" so a menu link will be available in the administration menus. Make this a "Normal menu entry" and be sure to set the View’s menu path to "Management".

    Drupal DraggableViews Views Setup

    Once you save your View you should now have a page containing the content of your View with handles to the right of them for drag and drop sorting.

    drag drop sorting interface drupal views


    Be sure to set the appropriate permissions for your sorting View. There is also an "Access draggable views" permission that must be granted. If a user has access to the sorting View but does not have the "Access draggable views" permission, they will see the View without the drag-and-drop handles.

    Drupal Draggable Views Permissions


    If you set your sortable View's path to something like "admin/content/sort/photos" as described above, and also set a "Normal menu entry" in the "Management" menu, you will now have a menu link to your sorting page from your administration menu.

    Your new sorting page will also be accessible by the View's Contextual Link, allowing for direct and quick access to sorting the View right from the main page.

    Draggable Views sort menu


    You should not rely on View's live preview as it may differ from the actual output.

    The reordering may not work if you have Caching turned on for your View. The drag and drop may work but upon saving your ordering it will revert back to the previous order. If you need caching then you will need to create a separate display for sorting and turn caching off for that View only.

Jan 19 2015
Jan 19

Any results of the color alterations, once they’re made, can be observed in the view block.

The tips are displayed after clicking on the text field.

After all the necessary settings are done, save them. Let’s look at the result.

The user login form has been adapted to the Bartik theme:

The login form:

The login block, which sits in the left column by default:

The module itself is available here.

Jan 19 2015
Jan 19

In this article I am going to show you how to create a custom Views field in Drupal 8. At the end of this tutorial, you will be able to add a new field to any node based View which will flag (by displaying a specific message) the nodes of a particular type (configurable in the field configuration). Although I will use nodes, you can use this example to create custom fields for other entities as well.

So let's get started by creating a small module called d8views (which you can also find in this repository):

name: Drupal 8 Views Demo
description: 'Demo module that illustrates working with the Drupal 8 Views API'
type: module
core: 8.x

In Drupal 7, whenever we want to create a custom field, filter, relationship, etc., for Views, we need to implement hook_views_api() and declare the version of Views we are using. That is no longer necessary in Drupal 8. What we do now is create an include file in the root of our module and implement the Views-related hooks there.

To create a custom field for the node entity, we need to implement hook_views_data_alter():

/**
 * Implements hook_views_data_alter().
 */
function d8views_views_data_alter(array &$data) {
  $data['node']['node_type_flagger'] = array(
    'title' => t('Node type flagger'),
    'field' => array(
      'title' => t('Node type flagger'),
      'help' => t('Flags a specific node type.'),
      'id' => 'node_type_flagger',
    ),
  );
}
In this implementation we extend the node table definition by adding a new field called node_type_flagger. Although there are many more options you can specify here, these are enough for our purpose. The most important thing to remember is the id key (under field), which identifies the Views plugin that will handle this field. In Drupal 7 we instead had a handler key in which we specified the class name.

In Drupal 8 we have something called plugins and many things have now been converted to plugins, including views handlers. So let's define ours inside the src/Plugin/views/field folder of our module:
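The article cuts off before the plugin class itself, so here is a minimal sketch of what such a handler could look like, using Drupal 8's FieldPluginBase and ResultRow. The class name follows the field id above, but the hard-coded 'article' type and the message text are assumptions standing in for the configurable option the article describes.

```php
namespace Drupal\d8views\Plugin\views\field;

use Drupal\views\Plugin\views\field\FieldPluginBase;
use Drupal\views\ResultRow;

/**
 * Flags nodes of a particular type.
 *
 * @ViewsField("node_type_flagger")
 */
class NodeTypeFlagger extends FieldPluginBase {

  /**
   * {@inheritdoc}
   *
   * This is a computed field, so leave the query untouched.
   */
  public function query() {
    // No query changes needed.
  }

  /**
   * {@inheritdoc}
   *
   * Renders the flag message for each row.
   */
  public function render(ResultRow $values) {
    $node = $values->_entity;
    // 'article' stands in for the node type configured in the field options.
    if ($node->bundle() == 'article') {
      return $this->t('Flagged: this node is of type @type.', array('@type' => $node->bundle()));
    }
    return '';
  }

}
```

With this class saved as src/Plugin/views/field/NodeTypeFlagger.php and the @ViewsField annotation matching the id from hook_views_data_alter(), the new field becomes available when editing any node-based View.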


Jan 19 2015
Jan 19

Customers frequently approach us asking for help rescuing their projects from site builders. Sometimes they have technological issues (mainly slow sites), but sometimes it's just plain bad usability or some wrong marketing concepts.

We were recently asked for help by a site that gets about 5,000 unique visitors a day. Despite those not-so-bad visitor numbers for its niche, the page was getting very low user interaction; they barely got a handful.

Among the many changes we made, there was something new a member of our team came up with: linking node comments and forum posts. We noticed in GA that, despite having very little activity, the forums got some attention; but just like in a bar, if no one is dancing you are probably not going to be the first one. On the other hand, users commented on the site's content, and those comments got lost among the more than 20,000 content nodes the site has.

The idea was simple: for each comment thread on a node there should be a forum post, and the two must be synchronized (if someone comments on the forum it should appear on the node, and vice versa).

All the magic can be easily implemented through hook_comment_insert():

function mymodule_comment_insert($comment) {
  // Create a forum comment.
  $node = node_load($comment->nid);
  // Need a flag to prevent recursive behaviour.
  static $executed = FALSE;
  if (!$executed) {
    $executed = TRUE;
    if ($node->type != 'forum') {
      // Look for a forum topic with the same title.
      $query = new EntityFieldQuery();
      $query->entityCondition('entity_type', 'node')
        ->entityCondition('bundle', 'forum')
        ->propertyCondition('status', NODE_PUBLISHED)
        ->propertyCondition('title', $node->title, '=')
        ->addMetaData('account', user_load(1)); // Run the query as user 1.

      $result = $query->execute();
      $forum = NULL;
      if (isset($result['node'])) {
        $news_items_nids = array_keys($result['node']);
        $forum = node_load(reset($news_items_nids));
        // Add it as a new comment on the existing forum topic.
        $comment->nid = $forum->nid;
      }
      else {
        $forum = new stdClass(); // Create a new node object.
        $forum->type = "forum"; // Or page, or whatever content type you like.
        node_object_prepare($forum); // Set some default values.
        $forum->title = $node->title;
        $forum->language = $node->language; // Or e.g. 'en' if locale is enabled.
        $forum->uid = $comment->uid; // UID of the author of the node; or use $node->name.
        $value = "Opinion about the article: " . $node->title . "\n\n" .
          $node->field_entradilla[LANGUAGE_NONE][0]['value'] .
          $comment->comment_body[LANGUAGE_NONE][0]['value'];
        $forum->body[$node->language][0]['value'] = $value;
        $forum->body[$node->language][0]['summary'] = '';
        $forum->body[$node->language][0]['format'] = 'filtered_html';
        $forum->taxonomy_forums[$node->language][0]['tid'] = 475;
        if ($forum = node_submit($forum)) { // Prepare node for saving.
          node_save($forum);
        }
      }
    }
    else {
      // Apply the reverse: copy the forum comment to the node
      // so that the conversations stay synchronized.
      // Look for a forum topic with the same title.
      $query = new EntityFieldQuery();
      $query->entityCondition('entity_type', 'node')
        ->entityCondition('bundle', 'forum')
        ->propertyCondition('status', NODE_PUBLISHED)
        ->propertyCondition('title', $node->title, '=')
        ->addMetaData('account', user_load(1)); // Run the query as user 1.

      $result = $query->execute();
      if (isset($result['node'])) {
        $news_items_nids = array_keys($result['node']);
        $forum = node_load(reset($news_items_nids));
        unset($comment->cid);
        $comment->nid = $forum->nid;
        comment_submit($comment);
        comment_save($comment);
      }
    }
  }
}

Along with this change we also made some very basic adjustments such as:

  • Allowing anonymous comments and removing the need to register
  • Reducing the number of fields in their subscribe form from 5 to 2.
  • Adding subscribe pop-ups
  • Etc.

The result? Conversions (newsletter subscriptions in this case) were up from 1-2 per day to 25-50 in less than 2 weeks, and user activity in forums has been growing steadily day after day. 

Our customer was sad to have lost a year (since the site was relaunched on Drupal) of user conversions and engagement, but happy to have now found the right partner to make his project succeed.

Jan 19 2015
Jan 19


2015-01-21 (All day) America/New_York


The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, January 21.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, February 4.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.