Dec 09 2017

We're in the midst of a Commerce 2 build-out for a client, and a key requirement was to preserve their quantity pricing rules. With thousands of products, and different pricing rules for each one, they need the price for each item in the cart adjusted to the appropriate price for the quantity purchased. When validating our plan for Drupal Commerce 2, we stumbled upon some examples of a custom price resolver, and knew this would work perfectly for the need.

Planning out the functionality

This past week was time for the implementation, and in addition to creating the new Price Resolver, we had to decide where to store the pricing data. The previous solution used a custom SQL lookup table, which they kept up-to-date from their back office Sage accounting system through an hourly CSV import.

We could, at worst, follow the same pattern -- query a table using the product SKU. But then we would have to maintain that table ourselves and keep a bunch more custom code in place. Given that we were already using Drupal 8's awesome Migrate module to populate the site, we looked further for how we might best implement this using standard Drupal practices.

Commerce 2 has two levels of entities representing products -- the "Product" entity with the text, photos, description, title, and the "Product Variation" which contains the SKU, the price, and specific attribute values. Both use the standard Drupal Field API, so we can easily add fields to either. Why not add a multi-value field for the price breaks?

However, each price break has two components: a "Threshold" value for the maximum quantity valid for a price, and the price itself. We don't want something heavy like paragraphs or an entity reference here -- creating a custom field type makes the most sense.

So our task list to accomplish this became the following:

  1. Create a custom field type for the price break, along with supporting field widget and formatter.
  2. Add an "unlimited" value instance of that field to the default product variation bundle.
  3. Create a custom price resolver that acts upon this field.
  4. Migrate the CSV price-break data into the price break fields (covered in a later blog post).

There are a couple of tools that make this process go very quickly: Drupal Console and PHPStorm. Drupal Console can quickly generate modules, plugins, and more, while PHPStorm is an incredibly smart IDE (integrated development environment) that helps you write the code correctly the first time, with helpful type-aheads that show function header parameters, automatically add "use" statements as you go, and more.

Create a custom Quantity Pricebreak field type

First up, the field type. We need a module to put our custom Commerce functionality, so let's create one:

> drupal generate:module

This project's code name is dte, so we created a module called "dte_commerce", with a dependency on commerce_price.

Next, generate the field type plugins:

> drupal generate:plugin:field

... Enter the newly created module and "ProductPriceBreakItem" for the field type plugin class name. Give it a reasonable name, id, and description. We made the widget class "ProductPriceBreakWidget" and the formatter "ProductPriceBreakFormatter".

When we were done, we had a directory structure that looked like this:
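Sketched from memory of what Drupal Console generates -- the exact file names follow the plugin class names chosen above, so treat this layout as a rough guide:

```
dte_commerce/
├── dte_commerce.info.yml
└── src/
    └── Plugin/
        └── Field/
            ├── FieldFormatter/
            │   └── ProductPriceBreakFormatter.php
            ├── FieldType/
            │   └── ProductPriceBreakItem.php
            └── FieldWidget/
                └── ProductPriceBreakWidget.php
```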

Product Price Break Field Type

The field type plugin is the first thing to get in place. Drupal Console creates this file for you; you just need to fill in the necessary methods. It actually creates more than you need -- you can strip out several of the methods, because we don't need a field settings form for this custom field type. (If we wanted to publish the module on Drupal.org, we might want to add a setting for whether the threshold for a price is the bottom value or the top value of the range, perhaps -- but for the time being we're keeping this as simple as possible.)

So we only need these methods:

  • propertyDefinitions()
  • schema()
  • generateSampleValue()
  • isEmpty()
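For orientation, those methods live inside a plugin class carrying a @FieldType annotation. Ours looked roughly like this -- the id, labels, and default widget/formatter ids are whatever you entered at the Drupal Console prompts, so treat the values below as placeholders:

```php
namespace Drupal\dte_commerce\Plugin\Field\FieldType;

use Drupal\Core\Field\FieldItemBase;

/**
 * Plugin implementation of the 'product_price_break' field type.
 *
 * @FieldType(
 *   id = "product_price_break",
 *   label = @Translation("Product price break"),
 *   description = @Translation("Stores a quantity threshold and a price."),
 *   default_widget = "product_price_break_widget",
 *   default_formatter = "product_price_break_formatter"
 * )
 */
class ProductPriceBreakItem extends FieldItemBase {
  // propertyDefinitions(), schema(), generateSampleValue(),
  // and isEmpty() go here -- see below.
}
```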

There are a lot of examples of creating custom fields around the web, but very few were clear on what types were available for propertyDefinitions and schema -- it took a bit of digging and experimenting to find types that worked.

We ended up with:

  public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
    // Prevent early t() calls by using TranslatableMarkup.
    $properties['threshold'] = DataDefinition::create('integer')
      ->setLabel(new TranslatableMarkup('Threshold'));
    $properties['price'] = DataDefinition::create('string')
      ->setLabel(new TranslatableMarkup('Price'));

    return $properties;
  }

... two fields, threshold and price. For this client, quantity is always an integer. We might've chosen numeric for price, but PriceItem in commerce_price uses strings, so we figured we would follow suit.

We considered extending commerce_price's PriceItem field type, but then we would have ended up with more configuration to do. We also thought of reusing a PriceItem field in the property definitions, but it seemed we would need to handle the schema ourselves anyway, and it just seemed more complicated than needed -- if there is a simple way to do this, I would love to hear about it. Please leave a comment below!

Next up, the Schema:

  public static function schema(FieldStorageDefinitionInterface $field_definition) {
    $schema = [
      'columns' => [
        'threshold' => [
          'type' => 'int',
        ],
        'price' => [
          'type' => 'numeric',
          'precision' => 19,
          'scale' => 6,
        ],
      ],
    ];

    return $schema;
  }

Simple enough schema -- an integer, and the same numeric field definition that commerce_price uses.

  public static function generateSampleValue(FieldDefinitionInterface $field_definition) {
    $values['threshold'] = rand(1, 999999);
    $values['price'] = rand(10, 10000) / 100;
    return $values;
  }

Our compound field has two different values now, so for tests or devel generate to work, we need to populate random values for those fields.

Finally, isEmpty():

  public function isEmpty() {
    $value = $this->get('threshold')->getValue();
    return $value === NULL || $value === "" || $value === 0;
  }

... if this method returns TRUE, the field is considered empty and that instance will get removed.

That's pretty much it! With the field type created, and this module enabled, you can add the field anywhere you add fields to an entity type.

On to the Widget...

Product Price Break Field Widget

Each field type needs at least one widget that can be used for editing values. Our custom widget needs to provide the form for filling out the two values in the custom field type -- threshold and price. Drupal Console provides a bunch of other method boilerplate for settings forms, etc., but most of it is optional.

We really only need to implement a single method: formElement(). This method returns the Form API fields for the editing form, so it ends up being just as simple as the field type methods:

public function formElement(FieldItemListInterface $items, $delta, array $element, array &$form, FormStateInterface $form_state) {
  $element += [
    '#type' => 'fieldset',
  ];
  $element['threshold'] = [
    '#type' => 'number',
    '#title' => t('Threshold'),
    '#default_value' => isset($items[$delta]->threshold) ? $items[$delta]->threshold : NULL,
    '#size' => 10,
    '#placeholder' => $this->getSetting('placeholder'),
    '#min' => 0,
    '#step' => 1,
  ];

  $element['price'] = [
    '#type' => 'textfield',
    '#title' => t('Price'),
    '#default_value' => isset($items[$delta]->price) ? $items[$delta]->price : NULL,
    '#size' => 10,
    '#placeholder' => $this->getSetting('placeholder'),
    '#maxlength' => 20,
  ];
  return $element;
}

This ends up being a simple widget that looks like this on the product variation form:

To finish up our custom price break field type, we need a field formatter.

Product Price Break Field Formatter

The Formatter is responsible for displaying the field. The only needed method is viewElements(), which renders all instances of the field. In this method we call another method to render each one. For now, this is really basic and ugly -- we'll come back and theme it later.

Here's the code:

public function viewElements(FieldItemListInterface $items, $langcode) {
  $elements = [];

  foreach ($items as $delta => $item) {
    $elements[$delta] = ['#markup' => $this->viewValue($item)];
  }

  return $elements;
}

protected function viewValue(FieldItemInterface $item) {
  return nl2br('Up to: ' . Html::escape($item->threshold) . ' Price: ' . Html::escape($item->price));
}

... and that's it! We have our custom field built. It currently looks like this:

Price Breaks on product

Next we add it to the product variation...

Add a product price break field to the product variation type

  1. Under Commerce -> Configuration -> Products -> Product Variations, on the Default row's operations button, go to "Manage Fields".
  2. Click "Add Field".
  3. In the "Add a new field" dropdown, under General select "Product price break".
  4. Add the label, and verify the machine name -- we chose "Qty Price Breaks", as field_qty_price_breaks.
  5. In "Allowed number of values", select Unlimited.

That's it for the widget, data storage, and everything! We don't even need to go near the database. The custom price break is now available on product variations.

Create the Custom Price Resolver

Now comes the fun part, the Price Resolver. The price resolver is implemented as a tagged Drupal Service, so the quick way to generate it is:

> drupal generate:service

We called it QtyPriceResolver, and put it in dte_commerce/src/Resolvers/QtyPriceResolver.php.

A Drupal service is registered in a module's .services.yml file, so here's what we ended up with in dte_commerce.services.yml:

    services:
      dte_commerce.qty_price_resolver:
        class: Drupal\dte_commerce\Resolvers\QtyPriceResolver
        arguments: []
        tags:
          - { name: commerce_price.price_resolver, priority: 600 }

The crucial part here is the tag -- by registering the service as a "commerce_price.price_resolver", this service will get called any time the price of a product is evaluated. The priority determines the order in which resolvers get called -- the first one to return a price sets the price for that product, and any remaining resolvers get skipped.

Our service class needs to implement \Drupal\commerce_price\Resolver\PriceResolverInterface. There is one crucial method to implement: resolve(). Ours looks like this:

public function resolve(PurchasableEntityInterface $entity, $quantity, Context $context) {
  if (isset($entity->field_qty_price_breaks) && !$entity->field_qty_price_breaks->isEmpty()) {
    foreach ($entity->field_qty_price_breaks as $price_break) {
      if ($quantity <= $price_break->threshold) {
        if (!isset($current_pricebreak) || $current_pricebreak->threshold > $price_break->threshold) {
          $current_pricebreak = $price_break;
        }
      }
    }
    if (isset($current_pricebreak)) {
      return new Price($current_pricebreak->price, 'USD');
    }
  }
}

The key things to notice here are that we have hard-coded the field machine name we created on the product variation -- field_qty_price_breaks. If we were writing this more generally, we would iterate through the fields of the product variation looking for a ProductPriceBreak field type -- this is one shortcut we took. The other thing to notice is the return value -- we need to return either a \Drupal\commerce_price\Price object or void. Because we're not storing the currency code, we always return "USD" as the currency code.
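A sketch of that more general approach (untested; it assumes the purchasable entity is fieldable and that our field type's plugin id is product_price_break):

```php
// Inside resolve(): find the first field on the variation whose type is
// our custom field type, instead of assuming a hard-coded machine name.
$field_name = NULL;
foreach ($entity->getFieldDefinitions() as $name => $definition) {
  if ($definition->getType() === 'product_price_break') {
    $field_name = $name;
    break;
  }
}
if ($field_name === NULL || $entity->get($field_name)->isEmpty()) {
  // No price break field on this variation; let other resolvers run.
  return;
}
// ... then iterate over $entity->get($field_name) exactly as above.
```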

And with that, we have fully implemented the custom quantity price discount system! All that's needed is the actual pricing data.

In the next article, we'll show a slick trick for migrating a set of pricing thresholds from a CSV into multiple price break fields...

Dec 08 2017

This blog has been quiet for the last year and a half, because I don’t like to announce things until I feel comfortable recommending them. Until today!

Since July 2016, API-First Drupal became my primary focus, because Dries felt this was one of the most important areas for Drupal’s future. Together with the community, I triaged the issue queue, and helped determine the most important bugs to fix and improvements to add. That’s how we ended up with REST: top priorities for Drupal … plan issues for each Drupal 8 minor:

If you want to see what’s going on, start following that last issue. Whenever there’s news, I post a new comment there.

But enough background. This blog post is not an update on the entire API-First Initiative, it’s about a particular milestone.

100% integration test coverage!

The biggest problem we encountered while working on rest.module, serialization.module and hal.module was unknown BC breaks. Because in the case of a REST API, the HTTP response is the API. What is a bug fix for person X is a BC break for person Y. The existing test coverage was rather thin, and often tested only "the happy path": the simplest possible case. That's why we would often accidentally introduce BC breaks.

Hence the clear need for really thorough functional (integration) test coverage, which was completed almost exactly a year ago. We added EntityResourceTestBase, which tests dozens of scenarios in a generic way, and used it to test the 9 entity types that already had some REST test coverage more thoroughly than before.

But we had to bring this to all entity types in Drupal core … and covering all 41 entity types in Drupal core was completed exactly a week ago!

The test coverage revealed bugs for almost every entity type. (Most of them are fixed by now.)

Tip: Subclass that base test class for your custom entity types, and easily get full REST test coverage — 41 examples available!
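A minimal sketch of what that subclassing looks like -- the module and entity type ("llama") are hypothetical, and the real base class requires a few more overrides; the 41 core subclasses are the authoritative examples:

```php
namespace Drupal\Tests\llama\Functional;

use Drupal\llama\Entity\Llama;
use Drupal\Tests\rest\Functional\EntityResource\EntityResourceTestBase;

abstract class LlamaResourceTestBase extends EntityResourceTestBase {

  public static $modules = ['llama'];

  protected static $entityTypeId = 'llama';

  protected function createEntity() {
    // The entity every scenario (each HTTP method, format, and
    // authentication mechanism) is tested against.
    $llama = Llama::create(['name' => 'Dries']);
    $llama->save();
    return $llama;
  }

  protected function getExpectedNormalizedEntity() {
    // The exact normalization you promise API consumers: this pins
    // down the HTTP response and is what catches BC breaks.
    return ['name' => [['value' => 'Dries']] /* ... */];
  }

  protected function getNormalizedPostEntity() {
    return ['name' => [['value' => 'Dries']]];
  }

}
```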

Guaranteed to remain at 100%

We added EntityResourceRestTestCoverageTest, which verifies that we have test coverage for all permutations of:

  • entity type
  • format: json + xml + hal_json
  • authentication: cookie + basic_auth + anon

It is now impossible to add new entity types without also adding solid REST test coverage!

If you forget that test coverage, you’ll find an ASCII-art llama talking to you:

Good people of #Drupal, I present unto you the greatest method of all time.

— webcsillag (@webchick) December 8, 2017

That is why we can finally say that Drupal is really API-First!

This of course doesn’t help only core’s REST module, it also helps the contributed JSON API and GraphQL modules: they’ll encounter far fewer bugs!


So many people have helped! In random order: rogierbom, alexpott, harings_rob, himanshu-dixit, webflo, tedbow, xjm, yoroy, timmillwood, gaurav.kapoor, Gábor Hojtsy, brentschuddinck, Sam152, seanB, Berdir, larowlan, Yogesh Pawar, jibran, catch, sumanthkumarc, amateescu, andypost, dawehner, naveenvalecha, tstoeckler — thank you all!

Special thanks to three people I omitted above, because they’re not well known in the Drupal community, and totally deserve the spotlight here, for their impressive contribution to making this happen:

That’s thirty contributors without whom this would not have happened!


What is going to be the next big milestone we hit? That’s impossible to say, because it depends on the chains of blocking issues that we encounter. It could be support for modifying and creating config entities, it could be support for translations, it could be that all major serialization gaps are fixed, it could be file uploads, or it could be ensuring all normalizers work in both rest.module & jsonapi.module.

The future will tell, follow along!

Dec 08 2017

Starting a new Drupal 8 project with the LakeDrops template takes less than 3 minutes. And in that short period of time you're not only getting the code base but also a number of extra benefits:

  • Pre-configured site with tested reference configuration which is a much better starting point than any of the available installation profiles.
  • Docker containers to host your environment throughout the full project's lifecycle.
  • Drush and Drupal Console configured for the environment and ready to sync with your live site.
  • Git repository ready to be pushed upstream.
  • Custom theme with a configurable base theme, ready to go.
  • Dorgflow included, which lets you contribute to Drupal.org easily at any time.

In this post we will be going through these steps:

  1. Go to the project template
  2. Create your project
  3. Start your Docker containers
  4. Install the site
  5. Look around at the result

How does this project template benefit you? Well, you can either use it as is and fast-track your initial setup process for new projects. Or you can take it as an idea, fork it, and move it in whatever direction suits your own needs. However, if you want to contribute to the project and also benefit from future development and maintenance, please get in touch or simply submit your merge requests. Whichever way you prefer, we honestly hope this approach helps you and your team become more efficient and focus more on the really important tasks for your clients.

Dec 08 2017
This blog post has been re-published from Aegir's blog and edited with permission from its author, Christopher Gervais.

My tenure with the Ægir Project only dates back about 7 or 8 years. I can’t speak first-hand about its inception and those early days. So, I’ll leave that to some of the previous core team members, many of whom are publishing blog posts of their own.

New look for the Ægir website

As part of the run-up to Ægir’s 10-year anniversary, we built a new site for the project, which we released today. It will hopefully do more justice to Ægir’s capabilities. In addition to the re-vamped design, we added a blog section, to make it easier for the core team to communicate with the community. So keep an eye on this space for more news in the coming weeks.

Ægir has come a long way in 10 years

When I first tried to use Ægir (way back at version 0.3), I couldn’t even get all the way through the installation. Luckily, I happened to live in Montréal, not too far from Koumbit, which, at the time, was a hub of Ægir development. I dropped in and introduced myself to Antoine Beaupré; then one of Ægir’s lead developers.

One of his first questions was how far into the installation process I'd gotten. As it turns out, he frequently asked this question when approached about Ægir. It helped him gauge how serious people were about running it. Back then, you pretty much needed to be a sysadmin to operate it effectively.

A few months later, Antoine had wrapped the installation scripts in Debian packaging, making installation a breeze. By that point I was hooked.

Free Software is a core value

Fast forward a couple years to DrupalCon Chicago. This was a time of upheaval in the Drupal community, as Drupal 7.0 was on the cusp of release, and Development Seed had announced their intention to leave the Drupal community altogether. This had far-reaching consequences for Ægir, since Development Seed had been the primary company sponsoring development, and employing project founder and lead developer Adrian Rossouw.

While in Chicago I met with Eric Gundersen, CEO of DevSeed, to talk about the future of Ægir. Whereas DevSeed had sold their flagship Drupal product, OpenAtrium, to another company, Eric was very clear that they wanted to pass on stewardship of Ægir to Koumbit, due in large part to our dedication to Free Software, and deep systems-level knowledge.

Since then Ægir has grown a lot. Here is one of the more interesting insights from OpenHub’s Ægir page:

Very large, active development team

Over the past twelve months, 35 developers contributed new code to Aegir Hosting System. This is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub.

To help visualize all these contributions, we produced a video using Gource. It represents the efforts of no less than 136 developers over the past 10 years. It spans the various components of Aegir Core, along with “Golden” contrib and other major sub-systems.

Of course, many other community members have contributed in other ways over the years. These include (but aren’t limited to) filing bug reports and testing patches, improving our documentation, answering other users’ questions on IRC, and giving presentations at local meetups.

The Future

The core team has been discussing options for re-architecting Ægir to modernize the codebase and address some structural issues. In the past few months, this activity has heated up. In order to simultaneously ensure ongoing maintenance of our stable product, and to simplify innovation of future ones, the core team decided to divide responsibilities across 3 branch maintainers.

Herman van Rink (helmo) has taken up maintenance of our stable 3.x branch. He’s handled the majority of release engineering for the project for the past couple years, so we can be very confident in the ongoing stability and quality of Ægir 3.

Jon Pugh, on the other hand, has adopted the 4.x branch, with the primary goal of de-coupling Ægir from Drush. This is driven largely by upstream decisions (in both Drupal and Drush) that would make continuing with our current approach increasingly difficult. He has made significant progress on porting Provision (Ægir’s back-end) to Symfony. Keep an eye out for further news on that front.

For my part, I’m pursuing a more radical departure from our current architecture, re-writing Ægir from scratch atop Drupal 8 and Ansible, with a full-featured (Celery/RabbitMQ) task queue in between. This promises to make Ægir significantly more flexible, which is being borne out in recent progress on the new system. While Ægir 5 will be a completely new code-base, most of the workflows, the security model, and the default interface will be familiar. Once it has proven itself, we can start pursuing other exciting options, like Kubernetes and OpenStack support.

So, with 10 years behind us, the future certainly looks Bryght*.

* Bryght was the company where Adrian Rossouw began work on “hostmaster” that would eventually become the Ægir Hosting System.

Dec 08 2017

Welcome to part five of our series, processing the results of the Amazee Agile Agency Survey. Previously I wrote about forming discovery and planning. This time let’s focus on team communication and process.

Team Communication

When it comes to ways of communicating, the options that most often got the highest rating of “mostly practised” were “Written communication in tickets”, “Written communication via chat (i.e. Slack)” as well as “Group meetings for the entire team”. The options that most often got selected as “Not practised” were “Written communication in blog or wiki” and “Written communication in pull requests”.

How does your team communicate

For us at Amazee Labs Zurich, a variety of communication channels is essential. Regular 1-on-1 meetings between managers and their employees allow us to continuously talk about what’s important to either side and work on improvements. We communicate a lot via Slack where we have various team channels, channels together with clients related to projects, channels for work-related topics or just channels to talk about fun stuff. Each morning, we start with a short team stand-up for the entire company where we check in with each other, and that’s followed by a more in-depth standup for the Scrum teams where we talk about “What has been done, What will be done and What’s blocking us”. Written communication happens between the team and customers in Jira tickets. As part of our 4-eyes-principle peer review process, we also give feedback on code within pull requests that are used to ensure the quality of the code and train each other.


We talked about iteration length in part 1 of this series. Now let’s look into how much time we spend on which things.

How much time does each team member spend every day?

According to the survey, the majority of standups take 15 minutes, followed by 5 minutes and 10 minutes with a few ones taking up to 30 minutes.

This also reflects our own practice: we take 10 minutes for the company-wide stand-up amongst 24 team members and another 15 minutes for the Scrum-team-specific stand-ups.

How much time do you spend each week on the following?

For the review phase, teams selected 2 hours and 1 hour equally often as the top-rated option, followed closely by 30 minutes. 4 hours was chosen by a few other teams, and the last option was one day. For retrospectives, the top-rated option was 30 minutes, followed by 1 hour. Far fewer teams take 2 hours or even up to 4 hours for the retrospective. For planning, we saw the most significant gap between the top-rated options: 30 minutes was followed by 4 hours, and then 2 hours and 1 hour were selected.

In the teams I work with, we usually spend half a day doing sprint review, retrospective and planning altogether. Our reviews typically take 45 minutes, the retrospective about 1.5 hours and the planning another 30 minutes. We currently don’t do these meetings together with customers, because the Scrum teams are stable teams that usually work for multiple customers. Instead, we do demos together with the clients individually, outside of these meetings. Also, our plannings are quite fast because the team already splits up stories in grooming sessions beforehand, and we only estimate the smaller tasks that don’t get split up later, as is usually done in sprint planning 2.

Overall how much time do you spend?

When looking at how much time is being spent on Client work (billable, unbillable) and Internal work we got a good variety of results. The top-rated option for “Client work (billable)” was 50-75%, “Client work (unbillable)” was usually rated below 10% and “Internal work” defaulted to 10-25%. Our internal statistics match these options that have been voted by the industry most often.

I also asked what is most important to you and your team when it comes to scheduling time. Providing value while keeping tech debt in a reasonable place was mentioned, which is also true for us. Over the last year, we started introducing our global maintenance team, which puts a dedicated focus on maintaining existing sites and keeping customer satisfaction high. By using a Kanban approach there, we can prioritise time-critical bug fixes when they are needed and work on maintenance-related tasks such as module updates in a coordinated way. We found it particularly helpful that the Scrum teams are well connected with the maintenance team to provide know-how transfer and domain knowledge where needed.

Another respondent mentioned: “We still need a good time tracker.” At Amazee we bill by the hour that we work, so accurate time tracking is a must. We do so by using Tempo Timesheets for Jira combined with the Toggl app.

How do you communicate and what processes do you follow? Please leave us a comment below. If you are interested in Agile Scrum training, don’t hesitate to contact us.

Stay tuned for the next post where we’ll look at defining work.

Dec 08 2017

Over 8 years have passed since there was a DrupalCamp in tropical Nicaragua. With the help of a diverse group of volunteers, sponsors, and university faculty and staff, we held our second one. DrupalCamp Lagos y Volcanes ("Lakes & Volcanoes") was a great success, with over 100 people attending over 2 days. It was a big undertaking, so we followed in giants' footsteps to prepare for our event. Many of the ideas were taken from the organizers' experiences attending Drupal events. Others came from local free software communities who organized events before us. Let me share what we did, how we did it, and what the results were.

Saturday group photo


In line with DrupalCon, we used the "Big Eight" social identifiers to define diversity and encourage everyone to have a chance to present. Among other statistics, we are pleased that 15% of the sessions and 33% of the trainings were presented by women. We would have liked higher percentages, but it was a good first step. Another related fact is that no speaker presented more than one session. We had the opportunity to learn from people with different backgrounds and expertise.

Ticket cost

BADCamp, Drupal's largest event outside of DrupalCons, is truly an inspiration when it comes to making affordable events. They are free! We got close. For $1 attendees had access to all the sessions and trainings, lunch both days, a t-shirt, and unlimited swag while supplies lasted. Of course, they also had the networking opportunities that are always present at Drupal events. Even though the camp was almost free, we wanted to give all interested people a chance to come and learn so we provided scholarships to many attendees.


The camp offered four types of scholarships:

  • Ticket cost: we would waive the $1 entry fee.
  • Transportation: we would cover any expense for someone to come from any part of the country.
  • Lodging: we would provide a room for people to stay overnight if they would come from afar.
  • Food: we would pay for meals during the two days of the camp.

About 40% of the people who attended did not pay the entry fee. We also had people traveling from different parts of the country. Some stayed over; others travelled back and forth each day. Everyone who requested a scholarship received it. It felt good to provide this type of opportunity, and recipients were grateful for it.


As you can imagine, events like these need funding and we are extremely grateful to our sponsors:

These are people who attended from afar. Some were scholarship recipients. Others got educational memberships.

Session recordings

Although we worked hard to make it possible for interested people to attend, we knew that some would not be able to make it. In fact, having sessions recorded would make it possible for anyone who understands Spanish to benefit from what was presented at the camp.

We used Kevin Thull’s recommended kit to record sessions. My colleague Micky Metts donated the equipment and I did the recording. I had the opportunity to be at some camps that Kevin recorded this year and he was very kind in teaching me how to use the equipment. Unfortunately, the audio is not clear in some sessions and I completely lost one. I have learned from the mistakes and next time it should be better. Check out the camp playlist in Drupal Nicaragua’s YouTube channel for the recordings.

Thank you Kevin. It was through session recordings that I improved my skills when I could not afford to travel to events. I’m sure I am not the only one. Your contributions to the Drupal community are invaluable!

Sprints and live commit!

Lucas Hedding led a sprint on Saturday morning. Most sprinters were people who had never worked with Drupal before the camp. They learned how to contribute to Drupal and worked on a few patches. One pleasant surprise was when Lucas went on stage with one of the sprinters and proceeded with the live commit ceremony. I was overjoyed that even with a short sprint, an attendee's contribution was committed. Congrats to Jorge Morales for getting a patch committed on his first sprint! And thanks to Holger Lopez, Edys Meza, and Lucas Hedding for mentoring and working on the patch.


Northern Lights DrupalCamp decided to swap (physical) swag for experiences, and what we lived was epic! For our camp, we went for low-cost swag. The only thing we had to pay for was t-shirts. Other local communities recommended we have them, and so we did. The rest was a buffet of the things I have collected since my first DrupalCon, Austin 2014: stickers, pins, temporary tattoos. It was fun trying to explain where I had collected each item. I could not remember them all, but it was nice to bring back those memories. We also had hand sanitizer and notebooks provided by local communities. Can you spot your organization/camp/module/theme logo on our swag table?

Free software communities

We were very lucky to have the support of different local communities. We learned a lot from their experiences organizing events. They also sent an army of volunteers and took the microphone to present on different subjects. A special thank you to the WordPress Nicaragua community who helped us immensely before, during, and after the event. It showed that when communities work together, we make a bigger impact.

Keeping momentum

Two weeks after the camp, we held two Global Training Days workshops. More than 20 people attended. I felt honored when some attendees shared that they had travelled from distant places to participate. One person travelled almost 8 hours. But more than distance, it was their enthusiasm and engagement during the workshops that inspired us. The last month has been very exhausting, but the local community is thrilled with the result.

A blooming community

The community has come a long way since I got involved in 2011. We have had highs and lows. Since Lucas and I kickstarted the Global Training Days workshops in 2014, we have seen more interest in Drupal. By the way, this edition marked our third anniversary facilitating the workshops! But despite all efforts, people would not stay engaged for long after initially interacting with the community. Things have changed.

In the last year, interest in Drupal has increased. We have organized more events and more people have attended. Universities and other organizations are approaching us requesting trainings. And what makes me smile most… the number of volunteers is at an all-time high. In the last month alone, the number of volunteers has almost doubled. The DrupalCamp and the Global Training Days workshops contributed a lot to this.

We recognize that the job is far from complete, and we already have plans for 2018. One of the things we need to do is find job opportunities. Even if people enjoy working with Drupal, they need to make a living. If you are an organization looking for talent, consider Nicaragua. We have great developers. Feel free to get in touch and I will put you in contact with them.

A personal thank you

I would like to take this opportunity to say thanks to Felix Delattre. He started the Drupal community in Nicaragua almost a decade ago. He was my mentor. He gave me my first Drupal gig. At a time when there was virtually no demand for Drupal talent in my country, that project helped me realize that I could make a living working with Drupal. But most importantly, Felix taught me the value of participating in the community. I remember creating my Drupal.org account after he suggested it at a local meetup.

His efforts had a profound effect on the lives of many, even beyond the borders of my country or those of a single project. Felix was instrumental in the development of local communities across Central and South America. He also started the OpenStreetMap (OSM) community in Nicaragua. I still find it impressive how OSM Nicaragua has mapped so many places and routes. In some cities, their maps are more accurate and complete than those of large Internet corporations. Thank you Felix for all you did for us!

Sunday group photo

We hope to have you in 2018!

The land of lakes and volcanoes awaits you next year. Nicaragua has a lot to offer and a DrupalCamp can be the perfect excuse to visit. ;-) Active volcanoes, beaches to surf, forests rich in flora and fauna are some of the charms of this tropical paradise.

Let’s focus on volcanoes for a moment. Check out this website for a sneak peek into one of our active volcanoes. That is Masaya, where you can walk to the border of the crater and see the flow of lava. Active volcanoes, dormant volcanoes, volcanoes around a lake, volcanoes in the middle of a lake, lagoons on top of volcanoes, volcanoes where you can “surf” down the slope... you name it, we have it.

We would love to have you in 2018!

More photos of the event will be added to this album.

Dec 07 2017
Dec 07

If you need an open-source solution for hosting and managing Drupal sites, there's only one option: the Aegir Hosting System. While it's possible to find a company that will host Drupal sites for you, Aegir helps you maintain control whether you want to use your own infrastructure or manage your own software-as-a-service (SaaS) product. Plus, you get all the benefits of open source.

Aegir turns ten (10) today. The first commit occurred on December 7th, 2007. We've actually produced a timeline including all major historical events. While Aegir had a slow uptake (the usability wasn't great in the early days), it's now being used by all kinds of organizations, including NASA.

I got involved in the project a couple of years ago, when I needed a hosting solution for a project I was working on. I started by improving the documentation, working on contributed modules, and then eventually the core system. I've been using it ever since for all of my SaaS projects, and have been taking the lead on Drupal 8 e-commerce integration. I became a core maintainer of the project about a year and a half ago.

So what's new with the project? We've got several initiatives on the go. While Aegir 3 is stable and usable now (Download it!), we've started moving away from Drush, which traditionally handles the heavy lifting (see Provision: Drupal 8.4 support for details), and into a couple of different directions. We've got an Aegir 4 branch based on Symfony, which is also included in Drupal core. This is intended to be a medium-term solution until Aegir 5 (codenamed AegirNG), a complete rewrite for hosting any application, is ready. Neither of these initiatives are stable yet, but development is on-going. Feel free to peruse the AegirNG architecture document, which is publicly available.

Please watch this space for future articles on the subject. I plan on writing about the following Aegir-related topics:

  • Managing your development workflow across Aegir environments
  • Automatic HTTPS-enabled sites with Aegir
  • Remote site management with Aegir Services
  • Preventing clients from changing Aegir site configurations

Happy Birthday Aegir! It's been a great ten years.

This article, Aegir: Your open-source hosting platform for Drupal sites, appeared first on the Colan Schwartz Consulting Services blog.

Dec 07 2017
Dec 07

bookclub pic

Last summer, whilst I was on vacation, my staff plotted against me. It was a sneaky, devious plot intended to subvert my authority. They started a BOOK CLUB.

Now, as I absorbed the horror of this colossal waste of company time, I was relieved to hear that we would not be reading the latest masterpiece from Nicholas Sparks, but rather books somewhat related to our work as a web design and development firm.

We would get together weekly for an hour to discuss. Attendance would be optional but greatly preferred, to get a wide range of perspectives from different parts of the organization. We would also have a designated facilitator to help guide the discussion and make sure some of our remote employees could get a word in edgewise.

Our first book, Design for Real Life, served as a great kickoff, underscoring the need to ensure empathy in all of your design. The book sparked conversation that extended past just traditional web design and elicited discussions ranging from customer service to understanding your co-workers.

Since that time, we have consistently met weekly and to-date have read:

I will say, although on the surface these titles tend to be pretty tech-heavy, I was pleasantly surprised to find that, although we covered the content of the books, they really served as a framework for discussions about our own organization as a whole. In essence, instead of just having a weekly meeting about “Change” and being forced to construct some kind of agenda for each, the books served AS that agenda and helped us think about things we hadn’t considered. The content and discussions also helped foster a shared understanding of how we could put some of the lessons learned into practice.

Obviously, it is important to get everyone involved and make sure all feel that they can speak and share thoughts. Feel free to check out the books we have already read. All were very well received and you don’t need to be a development or design agency to appreciate them. Happy reading!

Dec 07 2017
Dec 07

Version control systems manage change in your application's code base, but what about changes to the database? Managing changes to the database between environments is challenging. Luckily for us, Drupal 8 has introduced the Configuration Management System, which aims to solve this problem for site admins and developers alike.

In this video session we will start from the beginning by exploring what it means to manage configuration and why you would want to do so. Then we'll take a deep dive into the different components of the configuration management system in Drupal 8 and how these differ from what we had in Drupal 7. Finally we'll wrap up with some actionable tips on how to get the most out of the configuration management system on your next project, as well as an overview of the contributed module space around configuration management. By the end of this session, you will have a solid foundation for understanding and fully leveraging the configuration management system on your next Drupal 8 project. If after that you’re interested in learning even more, A Practical Guide to Configuration Management in Drupal 8 is a great place to check out more on configuration management in Drupal 8.

Key Concepts:

  • What is configuration management and why is it important
  • Configuration management changes between D7 and D8
  • How does the configuration management system work in Drupal 8
  • Practical tips and tricks for managing configuration on a real project
  • Overview of the contributed module space around configuration management
  • Configuration Management: Tips, Tricks & Perspective
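As a concrete taste of what gets managed, each configuration object in Drupal 8 is exported as a YAML file in your sync directory. Below is a simplified, illustrative sketch of a site-information export; the actual keys and values will differ per site:

```yaml
# config/sync/system.site.yml (simplified; values are illustrative)
name: 'My Drupal 8 Site'
mail: admin@example.com
slogan: ''
page:
  front: /node
  403: ''
  404: ''
default_langcode: en
```

Because these files are plain text, they can be committed to version control and moved between environments, which is exactly the workflow the session covers.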

    [embedded content]

    If you liked this article, you might also like:

Dec 07 2017
Dec 07

Matt and Mike talk with Webform 8.5.x creator Jacob Rockowitz, #D8Rules initiative member Josef Dabernig, and WordPress (and former Drupal) developer Chris Wiegman about keeping Drupal's contrib ecosystem sustainable by enabling module creators to benefit financially from their development.

Dec 07 2017
Dec 07
Some modules require that you download external JavaScript libraries, and in Drupal 8 that should be done with Composer. The Masonry module requires the JavaScript library of the same name, so we need to include the package in composer.json, in the repositories section: "repositories": [ { "type": "composer", "url": "" }, { "type": "package", "package": { "name": "desandro/masonry", "version": "master", "type": "drupal-library", "dist": { "url": "", "type": "file" } } } ], and in the require part: "require": { ... "desandro/masonry": "master", ... }. Then we need to add an installer path for libraries in the extra part, if we do not have one already: "extra": { ... "installer-paths": { "web/libraries/{$name}": [ "type:drupal-library" ] } }
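Pieced together, a complete composer.json for pulling in Masonry might look like the sketch below. The packages.drupal.org repository URL is the standard one for Drupal 8 projects; the Masonry version number and dist URL are assumptions (the library is distributed from its GitHub repository) and should be checked against the library's actual releases. The installer-paths mapping requires the composer/installers plugin:

```json
{
    "repositories": [
        {
            "type": "composer",
            "url": "https://packages.drupal.org/8"
        },
        {
            "type": "package",
            "package": {
                "name": "desandro/masonry",
                "version": "4.2.1",
                "type": "drupal-library",
                "dist": {
                    "url": "https://github.com/desandro/masonry/archive/v4.2.1.zip",
                    "type": "zip"
                }
            }
        }
    ],
    "require": {
        "composer/installers": "^1.2",
        "desandro/masonry": "4.2.1"
    },
    "extra": {
        "installer-paths": {
            "web/libraries/{$name}": ["type:drupal-library"]
        }
    }
}
```

After running composer require desandro/masonry, the library should land in web/libraries/masonry, which is where Drupal library-handling modules typically look for it.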
Dec 07 2017
Dec 07

Ten years ago today Adrian Rossouw committed the first code for the Aegir project. ComputerMinds have been involved in Aegir for many of those years, particularly one of our senior developers: Steven Jones. We asked him some questions about it to mark the occasion.

When did you get involved in the Aegir project?

I went to a session at Drupalcon Washington DC in March 2009 and Adrian was presenting this mad thing: Drupal managing other Drupal sites. At ComputerMinds we were looking for something to help manage our Drupal hosting and help update sites and generally make things reliable. Aegir seemed like it might work, but after having a play with the 0.1 version it didn’t seem to actually make anything easier!

About a year later I installed it again, and it seemed much more mature and usable at that point. It was working well, and we managed our own sites and many others via Aegir.

I continued to make changes and additions to Aegir and in March 2011 I was added as an Aegir core maintainer.

What has Aegir allowed you to do that you wouldn’t have been able to do otherwise?

We landed a project to build a platform for a UK political party that would host almost all of their websites, basically a site for every one of their parliamentary candidates, and suddenly we needed to be able to create 300+ sites, run updates on them and generally keep them working.

We knew that Aegir would be able to handle managing that number of sites, but we needed a quick and easy way to create 300 sites. Because Aegir was ‘just’ Drupal and sites were represented as nodes, we wrote a simple integration with feeds module: Aegir feeds and imported a CSV of details about the sites into an Aegir instance. A few moments later we had 300 sites in Aegir that it queued up and installed.

If we’d have written our own hosting system, then there’s no way we’d have had the reliability that Aegir offered, catching the odd site where upgrades failed etc. Aegir affords a Drupal developer like me a wonderful ease of extending the functionality, because, deep down, Aegir is ‘just’ Drupal.

What parts of the Aegir project have been particularly valuable/useful to you?

Aegir has had weekly scrum sessions on IRC for as long as I’ve known it, mostly for the maintainers to talk about what they’re doing and plan. They were a great way to get a feel for what the project was about and to get to know the people involved.

Mig5’s contributions, especially around documentation, have been incredibly valuable and even inspirational: topics like continuous deployment were not just dreams but mig5 laid out a clear path to achieve that with Aegir.

Aegir forced a shift in thinking, about what exactly your Drupal site was, what an instance of it was, what parts of your codebase did what and so on.

It’s been fun to do the open source thing and take code from one project and use it in Aegir. For example, I was always bothered that I had to wait for up to a minute for tasks to run to create a site, or migrate/update a site. I spotted another project that was using the Waiting Queue project to run tasks off a queue as quickly as possible, so I nabbed the code from there, wrote about 10 extra lines, and Hosting queue runner was created. Tasks then executed after a delay of at most 1 second.

Aegir has always been open to contribution and contributors are welcomed personally, being helped generously along the way.

How has your use of Aegir changed since you started using it?

We’ve moved away from using Aegir to host sites for most of our projects, but are still using it to manage over 1,000 sites. It's still essential for managing 'platform' projects that cover hundreds of sites.

It’s hard to pick an exact reason why we moved away from it; mainly we’ve reverted to some simpler upgrade scripts for site deployments, using tools like Jenkins.

We’ve compromised on the ‘site safety’ aspects of deployments, we no longer have a pretty web UI for things and we now lean heavily on the technical expertise of our developers, but we have gained significant speed and comprehensibility of deployments etc.

What Aegir does during a Migrate task is a little opaque, whereas having a bash script with 20 lines of code is easier to understand.

Aegir is still awesome when you want to change how something works in the hosting environment over lots and lots of sites, like:

  • Configurable PHP versions per site
  • Let’s Encrypt support
  • Writing custom config for Redis integrations

What do you think the Aegir project could look like, 10 years from now?

Aegir will become ‘headless’ and decentralised, having lots of instances working together to record, share and manage config. The Drupal UI will be a lightweight frontend to manage the Aegir cluster.

Alternatively, Aegir will become some sort of AI and start provisioning political cat gif websites for its own diabolical ends!

Dec 06 2017
Dec 06

We had a scenario where a client runs a cluster of events, and folks sign up for these. Usually a registrant signs up for all events, but then they might invite Mum to the Dinner, and brother John to the Talk, etc.

We wanted to achieve this on a single form with a single payment. We explored both CiviCart and Drupal Commerce but in the end concluded we could achieve this in a much lighter way with good old webforms.

The outcome is that up to 6 people can be registered for any combination of events, e.g.

  • c1 registers for Events A, B, C, D, E and F
  • c2 registers for B, C and D
  • c3 registers for A and B
  • c4 registers for A and F
  • etc

To see the full gory details of the conditionals approach we took, please read the full blog on Fuzion's site.

Dec 06 2017
Dec 06

by David Snopek on December 6, 2017 - 2:37pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Critical security release for the Mailhandler module to fix a Remote Code Execution (RCE) vulnerability.

Remote Code Execution vulnerabilities are scary: they basically mean that an attacker can run arbitrary code on your site. However, there are a number of mitigating factors in this case, so it's recommended to read the security advisory for Drupal 7.

With the help of the D6LTS vendors, a new version was released for Drupal 6 as well.

You can also download the patch.

If you have a Drupal 6 site using the Mailhandler module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Dec 06 2017
Dec 06

A Lifecycle Approach to Configuration Workflows will show viewers how Drupal empowers users to build complex site structures and relationships directly in the administrative user interface. In Drupal 7, the Features module was repurposed to manage site configuration deployments across environments. This developer-oriented workflow is now built into Drupal 8 core. But for small to medium-scale projects, strict configuration-based deployment workflows can be cumbersome and require substantial development expertise. So, how do we ensure the benefits of versioned config management after the original developers have moved on to greener pastures?

Fortunately, the configuration system is flexible and supports a wide range of workflows. A database UI-driven workflow is well-suited to rapid site prototyping and content modeling. A config-git-driven workflow pays dividends when scaling up the development team and performing automated tests. A live production site has a wide range of options for balancing rapid delivery with version control. Identifying and communicating the phase of work in a project’s life cycle and shifting workflows accordingly will improve development productivity and increase the diversity of contributors on a project.

Key Concepts:

  • History of configuration management

  • Competitive pressure within the CMS industry

  • Enterprises and Drupal

  • Configuration management in version control

  • Tricycle manifesto: build - prototype - build - maintain

  • Best practices

A Lifecycle Approach to Configuration Workflows

[embedded content]
Dec 06 2017
Dec 06

The Rise of Assistants

In the last couple of years we have seen the rise of assistants. AI is enabling more and more of our lives, and with the help of devices like Google Home and Amazon Echo it is now entering our living rooms and changing how we interact with technology. Though assistants have been around for a couple of years through the Google Assistant mobile app, the UX is changing rapidly with home devices, where we now experience conversational UI: being able to talk to devices. No more typing and searching; you can now converse with your device and book a cab or play your favourite music. Though the verdict on home devices like Echo and Google Home is pending, the underlying technology, AI-based assistants, is here to stay.

In this post, we will explore Google Assistant Developer framework and how we can integrate it with Drupal.

Google Assistant Overview

Google Assistant works with the help of apps that define actions, which in turn invoke operations to be performed on our products and services. These apps are registered with Actions on Google, which is essentially a platform comprising apps and thereby connecting different products and services. Unlike traditional mobile or desktop apps, users interact with Assistant apps through a conversation: natural-sounding back and forth exchanges (voice or text), not traditional click and touch paradigms.

The first step in the flow is understanding user requests through actions, so let's learn more about it.

How do Actions on Google work with the Assistant?

To get an overview of the workflow, it is very important to understand how Actions on Google actually works with the Assistant. From a development perspective, it's crucial to understand the whole Google Assistant and Google Actions model, so that extending it becomes easier.


Actions on Google


It all starts with the user requesting an action, followed by the Google Assistant invoking the best corresponding app using Actions on Google. It is then the duty of Actions on Google to contact the app by sending it a request. The app must be prepared to handle the request, perform the corresponding action, and send a valid response to Actions on Google, which is then passed to the Google Assistant. The Google Assistant renders the response in its UI, displays it to the user, and the conversation begins.

Let's build our own action. The following tools are required:

  • Ngrok - exposes a local web server over HTTPS.
  • Editor - Sublime/PHPStorm
  • Google Pixel 2 - Just kidding! Although you can order 1 for me :p
  • Bit of patience and 100% attention


The very first step is building our Actions on Google app. Google provides 3 ways to accomplish this:

  1. With Templates
  2. With Dialogflow
  3. With Actions SDK

The main purpose of this app is to match a user request with an action. For now, we will go with Dialogflow (for beginner convenience). To develop with Dialogflow, we first need to create an Actions on Google developer project and a Dialogflow agent. Having a project allows us to access the developer console to manage and distribute our app.

  1. Go to the Actions on Google Developer Console.
  2. Click on Add Project, enter YourAppName for the project name, and click Create Project.
  3. In the Overview screen, click BUILD on the Dialogflow card and then CREATE ACTIONS ON Dialogflow to start building actions.
  4. The Dialogflow console appears with information automatically populated in an agent. Click Save to save the agent.

After saving the agent, we start developing it. We can consider this step as training our newly created agent with a training data set. These structured training data sets are called intents. An individual intent comprises the query patterns a user may use to ask for an action, plus the events and actions associated with that particular intent, which together define a purpose the user wants to fulfill. So, every task the user wants the Assistant to perform is mapped to an intent. Events and actions can be considered a definitive representation of the actual associated event and task to be performed, which our products and services use to understand what the end user is asking for.

So, here we define all the intents that make up our app. Let's start by creating an intent to rebuild Drupal's caches.

  1. Create a new intent with name CACHE-REBUILD.
  2. We need to add all the query patterns we can think of that a user might say to invoke this intent. (Query patterns may contain parameters too; we will cover this later.)
  3. Add event cache-rebuild.
  4. Save the intent.
Intent Google Actions

For now, this is enough to understand the flow; we will focus on entities and other aspects later. To verify that the intent you have created gets invoked when the user says “do cache rebuild”, use “Try it now” on the right side of the Dialogflow window.


After we are done defining the action in Dialogflow, we need to prepare our product (the Drupal app) to fulfill the user request. Basically, after understanding the user request and matching it with an intent and action, Actions on Google is going to invoke our Drupal app in one way or another. This is accomplished using webhooks: Google sends a POST request with all the details. Under the Fulfillment tab, we configure our webhook. We need to ensure that our web service fulfills the webhook requirements.

According to these requirements, the web service must use HTTPS and the URL must be publicly accessible, hence we need to install ngrok. Ngrok exposes a local web server to the internet.


After having a publicly accessible URL, we just need to add this URL under the Fulfillment tab. This URL will receive the POST request and processing will be done from there, so we need to add the URL where we are going to handle requests, just like an endpoint. (It may look like https://<your-ngrok-id>.ngrok.io/google-assistant-request.)

add webhook url

Now, we need to build corresponding fulfillment to process the intent.

OK! It seems simple: we just need to create a custom module with a route and a controller to handle the request. Indeed it is simple; the only important point is understanding the flow, which we covered above.

So, why are we waiting? Let’s start.

Create a custom module and a routing file:

# droogle.routing.yml (the route name is illustrative)
droogle.google_assistant_request:
  path: '/google-assistant-request'
  defaults:
    _controller: '\Drupal\droogle\Controller\DefaultController::handleRequest'
    _title: 'Handle Request'
  requirements:
    _access: 'TRUE'

Now, let’s add the corresponding controller


<?php

namespace Drupal\droogle\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Logger\LoggerChannelFactoryInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\RequestStack;

/**
 * Class DefaultController.
 */
class DefaultController extends ControllerBase {

  /**
   * Symfony\Component\HttpFoundation\RequestStack definition.
   *
   * @var \Symfony\Component\HttpFoundation\RequestStack
   */
  protected $requestStack;

  /**
   * The logger factory.
   *
   * @var \Drupal\Core\Logger\LoggerChannelFactoryInterface
   */
  protected $loggerFactory;

  /**
   * Constructs a new DefaultController object.
   */
  public function __construct(RequestStack $request_stack, LoggerChannelFactoryInterface $loggerFactory) {
    $this->requestStack = $request_stack;
    $this->loggerFactory = $loggerFactory;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('request_stack'),
      $container->get('logger.factory')
    );
  }

  /**
   * Handles the request sent by Actions on Google.
   *
   * @return \Symfony\Component\HttpFoundation\JsonResponse
   *   The webhook response for the Assistant.
   */
  public function handleRequest() {
    $this->loggerFactory->get('droogle')->info('droogle triggered');
    $data = [
      'speech' => 'Cache Rebuild Completed for the Site',
      'displayText' => 'Cache Rebuild Completed',
      'data' => '',
      'contextOut' => [],
      'source' => 'uniworld',
    ];
    return JsonResponse::create($data, 200);
  }

  /**
   * Processes the incoming request.
   */
  protected function processRequest() {
    $params = $this->requestStack->getCurrentRequest();
    // Here we will process the request to get the intent
    // and fulfill the action.
  }

}

Done! We are ready with a request handler to process the request that will be made by Google Assistant.
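For reference, the JSON body the controller above returns, in the response shape Dialogflow's v1 webhook format expected at the time of writing, looks like this on the wire:

```json
{
    "speech": "Cache Rebuild Completed for the Site",
    "displayText": "Cache Rebuild Completed",
    "data": "",
    "contextOut": [],
    "source": "uniworld"
}
```

Here speech is what the Assistant reads aloud, and displayText is what it renders in the conversation UI.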



Part of the deployment has already been done, as we are developing locally. Now we need to enable our custom module. After that, let's get back to Dialogflow and establish the connection with the app to test this. Earlier we configured the fulfillment URL details; ensure we have enabled the webhook for all domains.



Let’s get back to the intent that we built, enable the webhook there too, and save the intent.

intent enable webhook

Now, to test this we need to integrate it with a device or a live/sandbox app. Under the Integrations tab, Google provides several options for this too. Enable the Web Demo and open the URL in a new tab to test this:

Integration Web Demo

Speak up and test your newly built app, and let the Google Assistant do its work.

As seen in the screenshot, there can be two types of responses: one where our server is not able to handle the request properly, and one where the Drupal server sends a valid JSON response.

GREAT! The connection is now established. You can now add intents to your Actions on Google app and handle each intent and action on the Drupal end. This is just a taste; conversational UX and assistant technology will definitely impact how we interact with technology, and we believe Drupal has a great role to play as a robust backend.

Dec 06 2017
Dec 06
In the last couple of years we have seen the rise of assistants. AI is enabling more and more of our lives, and with the help of devices like Google Home and Amazon Echo it is now entering our living rooms and changing how we interact with technology.
Dec 06 2017
Dec 06

Global e-commerce sales topped 1 trillion US dollars in 2012 for the first time in history. Industry estimates projected that sales will reach 4 trillion in 2020. As more enterprises conduct their core businesses on the Internet, Drupal has evolved from being a pure content management system to a full-fledged e-commerce site-builder. While e-commerce is not (yet) part of Drupal's core, support for it comes in the form of contributed modules.


A quick search on Drupal.org for stable, actively developed e-commerce modules generated 330 hits. Many such modules are optional for your online storefront. For example, AdSense, Affiliate Store, and Amazon Store are of no interest to you unless you want to monetize your website through advertising and affiliate marketing. Some modules such as Barcode are only relevant if your storefront requires that specific functionality.


In this post, we describe a set of 7 best-of-breed e-commerce Drupal modules which together implement the core functionalities of an online storefront. These modules focus on enterprise mission-critical operations that drive business results and have a direct impact on the bottom line.

So let's not keep you in suspense for too long. Here are the e-commerce modules that we at Vardot think are essential for every online shop built with Drupal:

  1. Drupal Commerce vs Ubercart

  2. Commerce Recommender / Ubercart Recommender

  3. Commerce Upsell / UC Upsell

  4. Invoice

  5. Commerce Shipping

  6. Mailjet / MailChimp E-Commerce

  7. Currency

Now let’s discuss each of the modules in particular and see why it is so great.



Drupal Commerce vs Ubercart



As I mentioned before, e-commerce is not a built-in core feature of Drupal. The easiest way to add e-commerce functionality to your website is to install one of two competing Drupal modules: Drupal Commerce or Ubercart. The two modules are often described as e-commerce ecosystems or frameworks which depend on third-party modules to make them feature-complete.

Drupal Commerce and Ubercart are both excellent e-commerce frameworks with their own active developer community. Ubercart is known for being easier to configure, and being more ready to deploy out-of-the-box. In contrast, Drupal Commerce is designed to be customizable and can scale up to support large enterprise e-commerce operations.

If you operate a small business with modest e-commerce requirements and a small I.T. budget, Ubercart is a good choice. Medium to large enterprises should consider Drupal Commerce because it is flexible enough to satisfy more complex requirements, and scalable enough to support future business growth. One caveat is that you need to possess technical expertise and be prepared to spend considerable time and resources to extend Drupal Commerce to do exactly what you want. You can find a more detailed comparison of Drupal Commerce with Ubercart in this article.



Commerce Recommender/Ubercart Recommender



To optimize revenue growth in e-commerce, enterprises need to find ways to boost revenue per order. Cross-selling and upselling are two key techniques for achieving revenue growth objectives. Commerce Recommender and Ubercart Recommender are the two Drupal modules you should install to enable cross-selling on the Drupal Commerce and Ubercart platforms, respectively.

Both modules make personalized recommendations for your web users. The recommendations are based on the user’s current order and any previous purchases. If the user is a new customer, the lack of a prior purchase history limits the recommendations that the software can make. In such a scenario, the cross-selling module analyzes the purchase history of other users who previously bought the same product in the current order, and recommends products which these users also ordered in the past.
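That fallback logic is essentially item-to-item collaborative filtering. As a rough illustration (a Python sketch rather than the modules' actual PHP, with made-up product IDs), the "buyers of this product also ordered" step could look like this:

```python
# Illustrative sketch only -- not the Recommender modules' actual code.
from collections import Counter

def recommend(current_item, order_history, limit=3):
    """order_history: iterable of past orders, each a set of product IDs.

    Count how often other products co-occur in orders that contain the
    current item, and return the most frequent co-purchases.
    """
    co_bought = Counter()
    for order in order_history:
        if current_item in order:
            co_bought.update(order - {current_item})
    return [item for item, _ in co_bought.most_common(limit)]
```

For example, if two past buyers of a T-shirt also bought a mug and one bought a poster, the mug ranks first for a new T-shirt customer.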



Commerce Upsell/UC Upsell



Upselling is different from cross-selling in that the former entices the customer to upgrade to a more expensive product with a better profit margin, while the latter is about buying additional products, such as an accessory. For upselling, Commerce Upsell and UC Upsell are the respective modules to install on the Drupal Commerce and Ubercart platforms.

The two modules allow site builders to define related products for upselling purposes. During checkout, the software recommends product upgrades based on what products are in the shopping cart.




Invoice


Invoice is a Drupal module which generates sales invoices for your online business. You can customize the format as well as the content of your invoices using template files. After generating your sales invoices, you can view them online as well as output them in PDF or HTML format.



Commerce Shipping



Your online customers can place their orders from any country in the world. Before they purchase your products, they want to know the shipping options and their associated cost. Commerce Shipping is a shipping rate calculator. It is designed as a shipping calculation platform which depends on third-party carrier-specific modules to provide the actual shipping rates. For instance, it supports UPS, FedEx and USPS through the modules Commerce UPS, Commerce FedEx, and Commerce USPS, respectively. Using rules, site administrators can configure which shipping services are available on a web store and how they are charged, including flat shipping rates.



Mailjet/MailChimp E-Commerce



Despite the phenomenal growth in social media, email marketing remains an integral part of any online marketing plan. Marketing and sales campaigns are regularly conducted by sending email to people on subscription lists. The Mailjet module supports email marketing on Drupal Commerce. Alternatively, MailChimp E-Commerce supports both Drupal Commerce and Ubercart. One e-commerce best practice is to offload email sending to third-party cloud-based email service providers. Mailjet and MailChimp E-Commerce integrate with the Mailjet and MailChimp email service providers, respectively. To use either module, you need to first sign up with the respective company. The services are free if email volume is kept below a certain threshold. Both modules enable site administrators to create email campaigns, personalize the marketing message, and track campaign effectiveness.




Currency


E-commerce reels in online customers from the farthest countries of the earth, together with their different local currencies. The online store must be able to convert product prices from the enterprise’s own preferred currency to the local currency of each customer. In addition, the newly converted local amount must be presented in a format that conforms to the customer’s regional convention. Currency is a Drupal module that specializes in converting world currencies based on stored exchange rates. In addition, this module can automatically customize the display format of price information based on the locale of each online shopper.
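The two jobs described above — converting via stored exchange rates and then formatting per the shopper's regional convention — can be pictured with a small sketch (Python for illustration only; the rates, symbols, and formatting rules below are hypothetical simplifications, not the Currency module's actual behavior):

```python
# Illustrative sketch only -- not the Currency module's actual code.
RATES = {("USD", "EUR"): 0.85}      # stored exchange rates (made up)
SYMBOLS = {"USD": "$", "EUR": "€"}

def convert(amount, source, target):
    """Convert a price using a stored exchange rate."""
    return round(amount * RATES[(source, target)], 2)

def format_price(amount, currency, locale):
    """Format an amount per the shopper's regional convention."""
    digits = f"{amount:,.2f}"
    if locale == "de_DE":
        # German style: '.' for thousands, ',' for decimals, symbol after.
        digits = digits.replace(",", "X").replace(".", ",").replace("X", ".")
        return f"{digits} {SYMBOLS[currency]}"
    # Default en_US style: symbol before, ',' for thousands, '.' for decimals.
    return f"{SYMBOLS[currency]}{digits}"
```

A US-dollar price shown to a German shopper would thus pass through both steps: convert to EUR first, then render in the de_DE format.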



Summary & Conclusion

E-commerce is the key to unlocking revenue generation potential of an enterprise Drupal website. Drupal provides excellent e-commerce modules under two main technology ecosystems, Drupal Commerce and Ubercart.

While integrating the right modules is critical to providing the necessary e-commerce functionalities, site builders also need to pay attention to other important factors such as SEO and site security. SEO will bring more visitors and potential customers to a website, and site security will protect them against hackers when they transact business online. For more information about essential Drupal modules, please refer to our earlier blog posts: 5 Security Modules for Every Drupal Website and 10 SEO Modules That Every Drupal Website Must Have.

Building an e-commerce website that is SEO-friendly and secure requires expertise that may be beyond the capability of many enterprises. If you require professional Drupal assistance, please contact Vardot.

Dec 06 2017
Dec 06

Hearing the words “migration from Drupal to WordPress”, some Drupal developers would shrug their shoulders and WordPress developers would applaud. However, there is no place for rivalry, even in such a long-standing competition as that between Drupal and WordPress: what matters most is an absolutely happy customer. For every case, there is a platform that fits a website like a glove. And if a customer for whatever reason feels the “glove” is not perfectly comfortable, maybe it’s time to go ahead and change it — though that will, of course, take a little longer :)

When it comes to choosing between Drupal and WordPress, migrations from WordPress to Drupal are more frequent. This is due to Drupal’s unlimited opportunities for various powerful features, fortress-level security, ability to handle more content and users, and so on.

Still, some customers wish to jump from Drupal to WordPress, seduced by the unmatched simplicity of interface, a gentle learning curve to start working straight away, many beautiful free themes, etc., and they just do not need what Drupal has to offer.

For those who are determined to move from Drupal to WordPress (and who have thought twice), we describe the opportunity to do so.

Drupal-to-WordPress migration via the FG Drupal to WordPress plugin

You can use the handy FG Drupal to WordPress plugin, which is ready for all Drupal and WordPress versions including the latest ones (8 and 4.9).

It migrates Drupal’s articles, basic pages, categories, tags, images, and more. It also resizes images according to WordPress settings, modifies URLs, preserves the ALT attributes for images, and does lots of other useful things.

The premium version promises to go even further and migrate custom taxonomies, fields, content types, users, comments, node relationships, as well as redirect URLs from Drupal to WordPress, accept user’s Drupal passwords, and so on.

To do a Drupal-to-WordPress migration with the FG, you will basically need to:

  • Install and enable the FG Drupal to WordPress plugin on the WP site.
  • Find the “import” option and enter your Drupal database parameters there.
  • Configure the FG’s content import options and perform the import.

Before the migration, stay safe and do not forget to back up your Drupal website, as well as your WordPress website (if you are migrating to an existing one rather than a blank new WP installation).

Drupal to WordPress migrations from our experience

The CMSs are different, so custom content migration needs great care. In many cases, ready-made tools are not enough and you will need custom migration scripts for a Drupal-to-WordPress migration. That’s what we faced when performing migrations from Drupal to WordPress for our customers.

Let’s see a little example — a website with Books and Quizz content types. We created the appropriate post types in WordPress with extra customization.

  • Books. In Drupal, relationships are provided by the Books module where every book page has its weight. In WordPress, we created a custom type of post and expressed relationships via additional fields.
  • Quizzes. We customized the look of the quizzes in the WP Pro Quizz plugin and generated an XML file to import all test categories, test topics and questions. This included answers of two types: true/false and multiple choice.

Final thoughts

To make sure all your content moves safely and accurately to its new place, contact our WordPress team. We have both Drupal and WordPress developers in our company, so the process is bound to be smooth, with the smallest details specific to both CMSs taken into account. Enjoy your Drupal-to-WordPress migration!

Dec 06 2017
Dec 06

November has said farewell, and autumn will soon follow. Winter is knocking at our door, bringing holidays full of gifts. Before we start enjoying December’s atmosphere, let's look back at the best Drupal blog posts from November.

Let's begin with a blog post by Tim Broeker from Electric Citizen with the title Drupal 8 DevOps: Automation for happier teams and clients. It explains how a solid DevOps strategy leads to better projects and more satisfied customers, without requiring any major financial investment.

A second spot is reserved for a blog post by Blair Wadman from BeFused: Moving from theming in Drupal 7 to Drupal 8? Overview of key changes, in which he explores some of the key changes in Drupal 8. He encourages everybody to move to Drupal 8 because of its benefits, such as improved security and more elegant theming.

Our third choice is Drupal Website Accessibility, Part 1: The problem, and why it matters… by Appnovation. This blog post explains what web accessibility refers to and shows some great examples.


Let’s continue with Smart Cropping of Media with Image Widget Crop Drupal Module by Jorge Montoya from OSTraining. This blog post is a tutorial on using the Image Widget Crop module in conjunction with the new media features for images available in Drupal core.

Our fifth choice is a blog post by Jan Pára from Morpht: Announcing Entity Class Formatter for Drupal 8, a module that opens up many possibilities. To find out which ones and how to use it, check the blog post.

Ranking sixth is a blog post from Jigar Mehta from Evolving Web: Profiling and optimizing Drupal migrations with Blackfire. It gives us insight into profiling and optimizing migrations with Blackfire, which saved them a lot of time.

Seventh is Security in Code Deployment: Unknown Drupal codebase by Perttu Ehn from Exove. It shows what to do when you have to review the entire codebase of a project you are asked to maintain.

And we conclude our list with a blog post by Andy Mead from Elevated Third: Marketing automation, meet Drupal. It is about the partnership between Drupal and marketing automation. He talks about how to use MA properly in Drupal to get the result we want: capturing latent demand and turning it into sales.

These are our top blogs from November. We will be collecting the best Drupal blog posts in December too. Stay tuned.


Dec 06 2017
Dec 06
Add Font Awesome Icons to Your Drupal Menus

Font Awesome icons use scalable vectors. You get high-quality icons that look good no matter the size of the screen.

The Drupal contrib module "Font Awesome Menu Icons" will help you to add and position the icons in your menu tabs. 

Let's start!

Step #1. Module Installation

In order to install the Font Awesome Menu Icons module, you have to meet certain dependencies: it requires the Font Awesome module and the Libraries module.

  • For proper functioning of the module, download and extract two libraries into your /libraries/ folder:
  • You should have the following structure in your [root]/libraries folder after downloading/unzipping/renaming your folders: 

The /libraries/ folder structure

 Step #2. Create Menu Items

For demonstration purposes in this tutorial, we’ll be using the Main navigation menu. You can apply this approach to all menus in Drupal. Even custom ones.

  • Click Structure > Menus > Main navigation
  • Click the Edit button next to the Home menu link

Edit menu Main Navigation

  • You’ll see two additional options in your menu Edit form:
    • FontAwesome Icon
    • FontAwesome Icon - Appearance

Two additional options

  • Click the first text box. You will see a tooltip with all FontAwesome icons to choose from. You can even filter them with the filter option on top.

Icons filter

  • Choose the icon of your liking or write the code according to the FontAwesome Cheatsheet
  • The second option allows you to choose whether you want the icon before/after the text or if you want no text at all. For this example, I’m going to place the icon after the text
  • Click Save.

Choose your icon appearance and click Save

  •  Now complete your menu structure if you haven’t done that yet and take a look at your menu. 

Final menu

You can tweak the size and color of your icons with CSS to fit the look you are trying to achieve.
I hope you liked reading this tutorial. Please leave your comments below. 

About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Dec 06 2017
Dec 06

When you look at a product online, you might think you're looking at a single product (say a T-shirt). But as far as an ecommerce site is concerned, you're really looking at a grouping of products, because that T-shirt comes in four different colors and three different sizes (4 x 3 = 12 products with individual SKUs). And that is just a basic product example. More options mean even more SKUs.
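The T-shirt arithmetic above can be made concrete with a short sketch (Python for illustration; the product name and SKU pattern are invented):

```python
# Illustrative sketch: one catalog product fans out into many variations,
# each with its own SKU. Names and SKU format below are made up.
from itertools import product

colors = ["red", "blue", "green", "black"]
sizes = ["S", "M", "L"]

# Each (color, size) combination is its own purchasable variation.
skus = [f"TSHIRT-{color.upper()}-{size}" for color, size in product(colors, sizes)]
print(len(skus))  # 4 colors x 3 sizes = 12 SKUs
```

Add one more option — say, two sleeve lengths — and the count doubles to 24, which is why "one product" quickly becomes many SKUs.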

What does "in stock" mean?

If you show a catalog listing of a product (the T-shirt), and some of the variations (sizes) are in stock while others are out of stock, is the product itself in stock? Most of the time, yes. But it can be a grey area. If you only have XXL shirts left, that's kind of an out-of-stock item. If you were in a retail store, you'd likely dump those few shirts in a clearance bin. You're not going to advertise that you have all these shirts when in fact you only have one size.
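One way to picture that grey area is a small decision function (an illustrative Python sketch, not Drupal Commerce code; the "clearance" rule is just one possible policy a store might choose):

```python
# Illustrative sketch: deriving a product-level stock status from its
# variations, with a special case for when only leftover sizes remain.
def product_stock_status(variation_stock, clearance_sizes=("XXL",)):
    """variation_stock maps a variation (e.g. size) -> units on hand."""
    in_stock = {size for size, qty in variation_stock.items() if qty > 0}
    if not in_stock:
        return "out of stock"
    # If only clearance-type sizes are left, treat it like the clearance bin.
    if in_stock <= set(clearance_sizes):
        return "clearance only"
    return "in stock"
```

With only a couple of XXL shirts left, this policy would report "clearance only" rather than advertising the product as fully in stock.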

Stock seems like a simple yes-we-have-it or no-we're-out kind of thing, but there's more to it than that. If you don't have it, when can you get it? Is it something that gets custom ordered anyway and people aren't going to care if they have to wait two or three or four weeks for it? Then it can always be in stock, because you can always get it. Is it a thing that if you don't have it today, having it three days from now is useless? Then you really don't have it in stock.

You need to decide on these kinds of things so you can configure your Drupal Commerce site appropriately. If you only have a couple of XXL shirts left, you could set them up as their own clearance product and sell them that way, for instance.

Blending with Drupal Commerce POS

When you integrate the Drupal Commerce POS system, those two XXL shirts are the only ones remaining for your in-store customers, so you never have to worry about orders going through that you can't fulfill. You do need to worry about irritating your customers, though—if they see a product on your site as in stock and then go to your brick-and-mortar store only to realize you don't actually have it, they're going to get annoyed.

So with that in mind, you have to think about the messaging you present to your customers online. If something is out of stock but you can get it in three to five days, for instance, maybe you want to communicate that. Or if it's a one-off and you will never have it in stock again, you need to let your customers know.

Introducing transactional stock

Something new in Commerce 2 is the concept of transactional stock. So you don't just have a product in stock: you have two that have been purchased and are about to be sent out, you have six sitting in inventory, and you have five on order. And maybe you have a pending return that you can eventually sell, but not until the return is complete. As far as your fulfillment people are concerned, you only have six. But your customer service and inventory management people know about the ones that are coming, and can adapt accordingly.
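The numbers in that example can be sketched as a tiny ledger (illustrative Python, not the Commerce 2 stock API; the field and method names are invented):

```python
# Illustrative sketch of transactional stock: the same transactions yield
# different numbers for different audiences.
from dataclasses import dataclass

@dataclass
class StockLevels:
    on_hand: int = 0          # sitting in inventory, ready to sell
    allocated: int = 0        # purchased, about to be sent out
    on_order: int = 0         # incoming from the supplier
    pending_returns: int = 0  # sellable only once the return completes

    def for_fulfillment(self):
        # Fulfillment only cares about what can ship right now.
        return self.on_hand

    def projected(self):
        # Customer service and inventory management also see what is coming.
        return self.on_hand + self.on_order
```

For the example above — two allocated, six on hand, five on order, one pending return — fulfillment sees six, while the projected figure is eleven.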

TL;DR: Stock in Commerce 2 is transactional and flexible.

Chat with us

If you'd like to know more about Drupal Commerce 2, online stock management or anything else ecommerce related, give us a shout. We'd love to help you out.

Contact us! 

Dec 05 2017
Dec 05

Every day, new blog posts raise ideas and distill knowledge through Twitter, Medium or elsewhere, continuously improving our field. The internet is becoming such a great place to share that, nowadays, I can hardly believe we managed to do web development without tools like StackOverflow.

During the past 10 years, probably just like you, I’ve absorbed almost anything: methodologies, languages, weird and beautiful data structures, the same for algorithms, computer science history, frameworks and APIs. I also had the chance to meet wonderful people, at work or at events, with various levels of experience.
I've mostly avoided the impostor syndrome and gained experience in many ways: maintenance tasks, full stack, specialization, devops, analysis, project management, tough issues, training and even a sales interim (!). But in the end… this feeling of missing the big picture was still there, knowing that there is more than that.

At a certain point comes this trivial question: what makes the difference between a junior and a senior developer?
A few answers can be found on StackExchange, Medium or Quora. But... still not quite it. What appeared to be the true answer for me came after talking and listening to other developers, and it is a single word: craftsmanship.

Craftsmanship is just doing things well with the help of various attitudes.

  • It all starts by taking care of what you do.
  • Do not work alone: help and accept to be helped.
  • Be critical.
  • Never limit yourself to your current knowledge; know your field and know it well.
  • ...

Here is one of the sessions that can just change your life as a new developer. At least you will get a good reminder of how things should be done.

So... I asked other developers where to find learning materials about software craftsmanship, and they all answered with a single voice.
You probably already know these books, but if you do not and still have room on your Santa Claus list, just add them at the top, really.

The Pragmatic Programmer

From Journeyman to Master
Andy Hunt and Dave Thomas, 1999

This book is almost 20 years old and still deserves its reputation. You will learn, among many other things, the concepts of Kaizen, capturing real requirements, code flexibility, testing, and personal responsibility.

I've also heard of companies offering this book to new collaborators.

The book does not present a systematic theory, but rather a collection of tips to improve the development process in a pragmatic way. The main qualities of what the authors refer to as a pragmatic programmer are being an early adopter, adapting quickly, inquisitiveness and critical thinking, realism, and being a jack-of-all-trades.

The book uses analogies and short stories to present development methodologies and caveats, for example the broken windows theory, the story of the stone soup, or the boiling frog. Some concepts were named or popularised in the book, like code katas, small exercises to practice programming skills, and rubber duck debugging, a method of debugging whose name is a reference to a story in the book.

If you do not have the time to read it now, you can still have a look at this summary.

The Clean Coder

A Code of Conduct for Professional Programmers
Robert C. Martin, 2011

It can be considered a continuation of the first book. Like its predecessor, I regard it as a humanistic approach to software development.

Introduces the disciplines, techniques, tools, and practices of true software craftsmanship. This book is packed with practical advice–about everything from estimating and coding to refactoring and testing. It covers much more than technique: It is about attitude. Martin shows how to approach software development with honor, self-respect, and pride; work well and work clean; communicate and estimate faithfully; face difficult decisions with clarity and honesty; and understand that deep knowledge comes with a responsibility to act.

Readers will learn

  • What it means to behave as a true software craftsman
  • How to deal with conflict, tight schedules, and unreasonable managers
  • How to get into the flow of coding, and get past writer’s block
  • How to handle unrelenting pressure and avoid burnout
  • How to combine enduring attitudes with new development paradigms
  • How to manage your time, and avoid blind alleys, marshes, bogs, and swamps
  • How to foster environments where programmers and teams can thrive
  • When to say “No”–and how to say it
  • When to say “Yes”–and what yes really means

Clean Code

A Handbook of Agile Software Craftsmanship
Robert C. Martin, 2008

A more practical approach with coding examples.

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

What kind of work will you be doing? You’ll be reading code―lots of code. And you will be challenged to think about what’s right about that code, and what’s wrong with it. More importantly, you will be challenged to reassess your professional values and your commitment to your craft.

Readers will come away from this book understanding

  • How to tell the difference between good and bad code
  • How to write good code and how to transform bad code into good code
  • How to create good names, good functions, good objects, and good classes
  • How to format code for maximum readability
  • How to implement complete error handling without obscuring code logic
  • How to unit test and practice test-driven development

I wish I had discovered them earlier, hence this post.
These books will give you a brand new perspective on your work. You will gain confidence and a strong feeling of mastering your skill. You will become a better colleague and produce maintainable code.



Dec 05 2017
Dec 05

Symfony 4.0 stable has been released, and, while packed with many new and powerful features, still maintains many of the APIs provided in earlier versions. Just before the stable release, many participants at #SymfonyConHackday2017 submitted pull requests in projects all over the web, encouraging them to adopt Symfony 4 support. In many cases, these pull requests consisted of nothing more than the addition of a “|^4” in the version constraints of all of the Symfony dependencies listed in the composer.json file of the selected project, with no code changes needed. The fact that this works is quite a testament to the commitment to backwards compatibility shown by the Symfony team. However, adding Symfony 4 as an allowed version also has testing implications. Ideally, the project would run its tests both on Symfony 4, while continuing to test other supported versions of Symfony at the same time.

In this blog post, we’ll look at testing techniques for projects that need to test two different major versions of a dependency; after that, we will examine how to do the same thing when you need to test three major versions. Finally, we’ll present some generalized scripts that make dependency testing easier for two, three or more different combinations of dependant versions.

Sometimes, the best thing to do is to bump up to the next major version number, adopt Symfony 4, and leave support for earlier versions, if needed, on the older branches. That way, the tests on each branch will cover the Symfony version applicable for that branch. This is a good solution for PHP projects that are building applications. For projects that are libraries in use by other projects, though, it is better to provide a level of continuity between different versions of your dependencies on a single branch. This sort of flexibility can greatly relieve the frustration caused by the “dependency hell” that can arise when different projects cannot be used together because they have strict requirements on their own dependencies that clash. In the case of Libraries that use Symfony components, the best thing to do is to provide Symfony 4 support in the current active branch, and make a new branch that supports only Symfony 4 and later, when new features from the new release are used.

Current / Lowest / Highest Testing

A technique called lowest / current / highest testing is commonly used for projects that support two different major versions of their dependencies. In this testing scheme, you would ensure that Symfony 4 components appear in your composer.lock file. If your Symfony version constraint in your composer.json file was "^3.4|^4", then you could run composer update --prefer-lowest on Travis to bring in the Symfony 3 components.

The portion of your .travis.yml file to support this might look something like the following:

matrix:
  include:
    - php: 7.2
      env: 'HIGHEST_LOWEST="update"'
    - php: 7.1
    - php: 7.0.11
      env: 'HIGHEST_LOWEST="update --prefer-lowest"'

install:
  - 'composer -n ${HIGHEST_LOWEST-install} --prefer-dist'

In this example, we use the expression ${HIGHEST_LOWEST-install} to determine whether we are running a current, lowest or highest test; this simplifies our .travis.yml file by removing a few lines of conditionals. In the bash shell, the expression ${VARIABLE-default} will evaluate to the contents of $VARIABLE if it has been set, and will otherwise return the literal value "default". Therefore, if the HIGHEST_LOWEST environment variable is not set, the composer command shown above will run composer -n install --prefer-dist. This will install the dependencies recorded in our lock file. To run the lowest test, we simply define HIGHEST_LOWEST to be update --prefer-lowest, which will select the lowest version allowed in our composer.json file.

Highest/lowest testing with just two sets of dependencies is easy to set up and takes very little overhead; there is really no reason why you should not do it. Even projects that support only a single major version of each of their dependencies benefit from highest/lowest testing, as these tests will catch problems that might otherwise accidentally creep into the code base. For example, if one of the project’s dependencies inadvertently introduces a bug that breaks backwards compatibility mid-release, or if an API not available in the lowest-advertised version of a dependency is used, that fact should be flagged by a failing test.

Supporting only two major versions of a dependency is sufficient in many instances. Symfony 2 is no longer supported, so maintaining tests for it is not strictly necessary. In some cases, though, you may wish to continue supporting older packages. If a project has traditionally supported both Symfony 2 and Symfony 3, then support for Symfony 2 should probably be maintained until the next major version of the project. I have seen projects that drop support for obsolete versions of PHP or dependencies without creating a major version increase, but doing this can have a cascading effect on other projects, and should therefore be avoided. There are also some niche use cases for supporting older dependency versions. For example, Drush 8 continues to support obsolete versions of Drupal 8, which still depend on Symfony 2, to prevent problems for people who need to update an old website.

Extending to Test Three Versions

If you are in a position to support three major versions of a dependency in a project all in the same branch, then highest/lowest testing is still possible, but it gets a little more complicated. In the case of Symfony, what we will do is ensure that our lock file contains Symfony 3 components, and use the highest test to cover Symfony 4, and the lowest test to cover Symfony 2. Because Symfony 4 requires a minimum PHP version of 7.1, we can keep our Composer dependencies constrained to Symfony 3 by setting the PHP platform version to 7.0 or lower. We’ll use PHP 5.6, to keep other dependencies at a reasonable baseline.

"require": {
    "php": ">=5.6.0",
    "symfony/console": "^2.8|^3|^4",
    "symfony/finder": "^2.5|^3|^4"
"config": {
    "platform": {
        "php": "5.6"

There are a couple of implications to doing this that will impact our highest/lowest testing, though. For one thing, the platform PHP version constraint that we added will interfere with the composer update command’s ability to update our dependencies all the way to Symfony 4. We can remove it prior to updating via composer config --unset platform.php. This alone is not enough, though.

The .travis.yml tests then look like the following example:

matrix:
  include:
    - php: 7.2
      env: 'HIGHEST_LOWEST="update"'
    - php: 7.1
    - php: 7.0.11
    - php: 5.6
      env: 'HIGHEST_LOWEST="update --prefer-lowest"'

install:
  - |
    if [ -n "$HIGHEST_LOWEST" ] ; then
      composer config --unset platform.php
    fi
  - 'composer -n ${HIGHEST_LOWEST-install} --prefer-dist'

This isn’t a terrible solution, but it does add a little complexity to the test scripts, and poses some additional questions.

  • The test dependencies that are installed are being selected by side effects of the constraints of the project dependencies themselves. If the dependencies of our dependencies change, will our tests still cover all of the dependencies we expect them to?
  • What if you want to test the highest configuration locally? This involves modifying the local copy of your composer.json and composer.lock files, which introduces the risk these might accidentally be committed.
  • What if you need to make other modifications to the composer.json in some test scenarios, for example, setting the minimum-stability to dev to test the latest HEAD of master for some dependencies?
  • What if you want to do current/highest testing on both Symfony 3.4 and Symfony 4 at the same time?

If you want to do current/highest testing for more than one set of dependency versions, then there is no alternative but to commit multiple composer.lock files. If each composer.lock has an associated composer.json file, that also solves the problem of managing different configuration settings for different test scenarios. The issue of testing these different scenarios is then simplified to the matter of selecting the specific lock file for the test. There are two ways to do this in Composer:

  • Use the COMPOSER environment variable to specify the composer.json file to use. The composer lock file will be named similarly.
  • Use the --working-dir option to stipulate the directory where the composer.json and composer.lock file are located.

Using either of these techniques, it would be easy to keep multiple composer.json files, and install the right one with a single-line install: step in .travis.yml using environment variables from the test matrix. However, we really do not want to have to modify multiple composer.json files every time we make any change to our project’s main composer.json file. Also, having to remember to run composer update on multiple sets of Composer dependencies is an added step that we also could really do without. Fortunately, these steps can be easily automated using a Composer post-update-cmd script handler. It wouldn’t take too many lines of code to do this manually in a project’s composer.json and .travis.yml files, but we will make things even more streamlined by using the composer-test-scenarios project, as explained in the next section.

Using Composer Test Scenarios

You can add the composer-test-scenarios project to your composer.json file via:

composer require --dev greg-1-anderson/composer-test-scenarios:^1

Copy the scripts section from the example composer.json file from composer-test-scenarios. It contains some recommended steps for testing your project with PHPUnit. Customize these steps as desired, and then modify the post-update-cmd handler to define the test scenarios you would like to test. Here are the example test scenarios defined in the example file:

"post-update-cmd": [
    "create-scenario symfony2 'symfony/console:^2.8' --platform-php '5.4' --no-lockfile",
    "create-scenario symfony3 'symfony/console:^3.0' --platform-php '5.6'",
    "create-scenario symfony4 'symfony/console:^4.0'"
]

These commands create “test scenarios” named symfony2, symfony3 and symfony4, respectively. As you can see, additional composer requirements are used to control the dependencies that will be selected for each scenario, which is an improvement over what we were doing before. There are also additional options for setting configuration values such as the platform PHP version. The --no-lockfile option may be used for test scenarios that are only used in lowest/highest scenarios, as only “current” tests need a composer.lock file. Once you have defined your scenarios, you no longer need to worry about maintaining the derived composer.json and composer.lock files, as they will be created for you on every run of composer update. The generated files are written into a directory called scenarios; commit the contents of this directory along with your composer.lock file.
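Once committed, the generated scenarios directory ends up looking something like the following (derived from the example scenarios above; the exact layout may differ between versions of composer-test-scenarios):

```
scenarios/
├── symfony2/
│   └── composer.json        (no lock file: created with --no-lockfile)
├── symfony3/
│   ├── composer.json
│   └── composer.lock
└── symfony4/
    ├── composer.json
    └── composer.lock
```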

The install step in our .travis.yml can now be done in a single line again, with the HIGHEST_LOWEST environment variable being defined the same way it was in the first example:

  - 'composer scenario "${SCENARIO}" "${HIGHEST_LOWEST-install}"'

The composer scenario command will run the scenario step in your composer.json file that you copied in the previous section. This command will run composer install or composer update on the appropriate composer.json file generated by the post-update-cmd; if SCENARIO is not defined, the project’s primary composer.json file will be installed instead.
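A hypothetical Travis test matrix driving that install line might look something like this (scenario names follow the earlier example; the variable values are illustrative only):

```yaml
env:
  matrix:
    - SCENARIO=symfony2 HIGHEST_LOWEST=update   # highest test on the symfony2 scenario
    - SCENARIO=symfony3                         # current test on the symfony3 lock file
    - SCENARIO=symfony4                         # current test on the symfony4 lock file
    - HIGHEST_LOWEST=update                     # highest test on the primary composer.json

install:
  - 'composer scenario "${SCENARIO}" "${HIGHEST_LOWEST-install}"'
```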

In conclusion, the composer-test-scenarios project allows current / highest / lowest testing to be set up with minimal effort, and gives more control over managing the different test scenarios without complicating the test scripts. If you have a project that would benefit from highest / lowest testing, give it a try. Making more use of flexible dependency version constraints in commonly-used PHP libraries reduces “dependency hell” problems, and testing these variations will make them easier to maintain. Being fastidious in these practices will make it easier for everyone to use and adopt improved libraries more quickly and with less effort.

Dec 05 2017
Dec 05

(This announcement by Mateu Aguiló Bosch is cross-posted from Medium. Contenta is a community project that some Lullabots are actively contributing to.)

Contenta CMS reaches 1.0

A lot has happened in the last few months since we started working on Contenta CMS. The process has been really humbling. Today we release Contenta CMS 1.0: Celebrate!

If you don’t know what Contenta CMS is, visit the project site to learn more. If you are more curious, check the demo to see the public-facing side of a Contenta CMS installation. To check the more interesting features in the private admin interface, install it locally with one command.

The Other Side

When we decided to kick off the development of Contenta we speculated that someone would step in and provide front-end examples. We didn’t predict the avalanche of projects that would come. Looking back we can safely conclude that a big part of the Drupal community was eager to move to this model that allows us to use more modern tools.

multi-screen icon

We are not surprised to see that the tech context has changed, that novel interfaces are now common, or that businesses realize the value of multi-channel content distribution. That was expected.

We did not expect to see long-time Drupal contributors jump in right away to write consumers for the API generated by Contenta. We could not sense the eagerness of so many Drupal developers to use Drupal in another way. It was difficult to guess that people would collaborate on a Docker setup. We were also surprised to see the Contenta community rally around documentation, articles, tutorials, and the explanation site. We didn’t anticipate that the core developers of three major frameworks would take an interest in this and contribute consumers. Very often we woke up to unread messages in the Contenta channel with an interesting conversation about a fascinating topic. We didn’t think of that when Contenta was only a plan in our heads.

We are humbled by how much we’ve done these months, the Contenta CMS community did not cease to amaze.

The Drupal Part

Over the course of the last several months, we have discussed many technical and community topics. We have agreed more often than not, disagreed and come to an understanding, and made a lot of progress. As a result of it, we have developed and refactored multiple Drupal modules to improve the practical challenges that one faces on a decoupled project.

Drupal Logo

We are very glad that we based our distribution on a real-world example. Many consumers have come across the same challenges at the same time from different perspectives. That is rare in an organization since it is uncommon to have so many consumers building the same product. Casting light on these challenges from multiple perspectives has allowed us to understand some of the problems better. We had to fix some abstractions, and in some other cases an abstraction was not possible and we had to become more practical.

One thing that has remained constant is that we don’t want to support upgrade paths; we see Contenta as a good starting point. Fork and go! When you need to upgrade Drupal and its modules, you do it just like with any other Drupal project. No need to upgrade Contenta CMS itself. After trying other distributions in the past, and seeing the difficulties of using and maintaining them, we made a clear decision that we didn’t need to support that.

Contenta Logo

This tagged release is our way of saying to the world: We are happy about the current feature set, we feel good about the current stability, and this point in time is a good forking point. We will continue innovating and making decoupled Drupal thrive, but from now on we’ll have Contenta CMS 1.0: Celebrate as a stable point in time at our backs.

With this release, we are convinced that you can use Contenta as a starter kit and hub for documentation. We are happy about your future contributions to this kit and hub.

See the features in the release notes in GitHub, read Mateu's previous Contenta article, and celebrate Contenta with us!

Thanks to Sally Young for her help with grammar and readability in this article.

Hero image by Pablo Heimplatz 

Dec 05 2017
Dec 05

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Earlier this year, the Commonwealth of Massachusetts launched Mass.gov on Drupal 8. Holly St. Clair, the Chief Digital Officer of the Commonwealth of Massachusetts, joined me during my Acquia Engage keynote to share how Mass.gov is making constituents' interactions with the state fast, easy, meaningful, and "wicked awesome".


Constituents at the center

Today, 76% of constituents prefer to interact with their government online. Before Mass.gov switched to Drupal, it struggled to provide a constituent-centric experience. For example, a student looking for information on tuition assistance on Mass.gov would have to sort through 7 different government websites before finding relevant information.

Mass.gov - Before and After

To better serve residents, businesses and visitors, the team took a data-driven approach. After analyzing site data, they discovered that 10% of the content serviced 89% of site traffic. This means that up to 90% of the content on Mass.gov was either redundant, out-of-date or distracting. The digital services team used this insight to develop a site architecture and content strategy that prioritized the needs and interests of citizens. In one year, the team at Mass.gov moved a 15-year-old site from a legacy CMS to Drupal.

The team at Mass.gov also incorporated user testing into every step of the redesign process, including usability, information architecture and accessibility. In addition to inviting over 330,000 users to provide feedback on the pilot site, the team partnered with the Perkins School for the Blind to deliver meaningful accessibility that surpasses compliance requirements. This approach has earned Mass.gov a score of 80.7 on the System Usability Scale, 12 percent higher than the reported average.

Open from the start

As an early adopter of Drupal 8, the Commonwealth of Massachusetts decided to open source the code that powers Mass.gov. Everyone can see the code that makes Mass.gov work, point out problems, suggest improvements, or use the code for their own state. It's inspiring to see the Commonwealth of Massachusetts fully embrace the unique innovation and collaboration model inherent to open source. I wish more governments would do the same!


The new Mass.gov is engaging, intuitive and, above all else, wicked awesome. Congratulations!

Dec 05 2017
Dec 05

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Cable squeeze

Last month, the Chairman of the Federal Communications Commission, Ajit Pai, released a draft order that would soften net neutrality regulations. He wants to overturn the restrictions that make paid prioritization, blocking or throttling of traffic unlawful. If approved, this order could drastically alter the way that people experience and access the web. Without net neutrality, Internet Service Providers could determine what sites you can or cannot see.

The proposed draft order is disheartening. Millions of Americans are trying to save net neutrality; the FCC has received over 5 million emails, 750,000 phone calls, and 2 million comments. Unfortunately this public outpouring has not altered the FCC's commitment to dismantling net neutrality.

The commission will vote on the order on December 14th. We have 10 days to save net neutrality.

Although I have written about net neutrality before, I want to explain the consequences and urgency of the FCC's upcoming vote.

What does Pai's draft order say?

Chairman Pai has long been an advocate for "light touch" net neutrality regulations, and claims that repealing net neutrality will allow "the federal government to stop micromanaging the Internet".

Specifically, Pai aims to scrap the protection that classifies ISPs as common carriers under Title II of the Communications Act of 1934. Radio and phone services are also protected under Title II, which prevents companies from charging unreasonable rates or restricting access to services that are critical to society. Pai wants to treat the internet differently, and proposes that the FCC should simply require ISPs "to be transparent about their practices". The responsibility of policing ISPs would also be transferred to the Federal Trade Commission. Instead of maintaining the FCC's clear-cut and rule-based approach, the FTC would practice case-by-case regulation. This shift could be problematic as a case-by-case approach could make the FTC a weak consumer watchdog.

The consequences of softening net neutrality regulations

At the end of the day, frail net neutrality regulations mean that ISPs are free to determine how users access websites, applications and other digital content.

It is clear that depending on ISPs to be "transparent" will not protect against implementing fast and slow lanes. Rolling back net neutrality regulations means that ISPs could charge website owners to make their website faster than others. This threatens the very idea of the open web, which guarantees an unfettered and decentralized platform to share and access information. Gravitating away from the open web could create inequity in how communities share and express ideas online, which would ultimately intensify the digital divide. This could also hurt startups as they now have to raise money to pay for ISP fees or fear being relegated to the "slow lane".

The way I see it, implementing "fast lanes" could alter the technological, economic and societal impact of the internet we know today. Unfortunately it seems that the chairman is prioritizing the interests of ISPs over the needs of consumers.

What you can do today

Chairman Pai's draft order could dictate the future of the internet for years to come. In the end, net neutrality affects how people, including you and me, experience the web. I've dedicated both my spare time and my professional career to the open web because I believe the web has the power to change lives, educate people, create new economies, disrupt business models and make the world smaller in the best of ways. Keeping the web open means that these opportunities can be available to everyone.

If you're concerned about the future of net neutrality, please take action. Share your comments with the U.S. Congress and contact your representatives. Speak up about your concerns with your friends and colleagues. Organizations like The Battle for the Net help you contact your representatives — it only takes a minute!

Now is the time to stand up for net neutrality: we have 10 days and need everyone's help.

Dec 04 2017
Dec 04

With the release of Drupal 8.4.x and its use of ES6 (ECMAScript 2015) in Drupal core, we’ve started the task of updating our jQuery plugins/widgets to use the new syntax. This post will cover what we’ve learnt so far and the benefits of doing this.

If you’ve read my post about the Asset Library system you’ll know we’re big fans of the Component-Driven Design approach, and having a javascript file per component (where needed, of course) is ideal. We also like to keep our JS widgets generic so that the entire component (entire styleguide, for that matter) can be used outside of Drupal as well. Drupal behaviours and settings are still used, but live in a different javascript file to the generic widget, and simply call its function, passing in Drupal settings as “options” as required.

Here is an example with an ES5 jQuery header component, with a breakpoint value set somewhere in Drupal:

@file header.js

(function ($) {

  $.fn.header = function (options) {
    var opts = $.extend({}, $.fn.header.defaults, options);
    return this.each(function () {
      var $header = $(this);
      // do stuff with $header
    });
  };

  // Overridable defaults
  $.fn.header.defaults = {
    breakpoint: 700,
    toggleClass: 'header__toggle',
    toggleClassActive: 'is-active'
  };

})(jQuery);

@file header.drupal.js

(function ($, Drupal, drupalSettings) {
  Drupal.behaviors.header = {
    attach: function (context) {
      $('.header', context).header({
        breakpoint: drupalSettings.my_theme.header.breakpoint
      });
    }
  };
})(jQuery, Drupal, drupalSettings);

Converting these files into a different language is relatively simple as you can do one at a time and slowly chip away at the full set. Since ES6 is used in the popular JS frameworks it’s a good starting point for slowly moving towards a “progressively decoupled” front-end.

Support for ES6

Before going too far I should mention support for this syntax isn’t quite widespread enough yet! No fear though, we just need to add a “transpiler” into our build tools. We use Babel, with the babel-preset-env, which will convert our JS for us back into ES5 so that the required older browsers can still understand it.

Our Gulp setup will transpile any .es6.js file and rename it (so we’re not replacing our working file), before passing the renamed file into our minifying Gulp task.

With the Babel ENV preset we can specify which browsers we actually need to support, so that we’re doing the absolute minimum transpilation (is that a word?) and keeping the output as small as possible. There’s no need to bloat your JS trying to support browsers you don’t need to!

import gulp from 'gulp';
import babel from 'gulp-babel';
import rename from 'gulp-rename';
import path from 'path';
import config from './config';

// Helper function for renaming files
const bundleName = (file) => {
  file.dirname = file.dirname.replace(/\/src$/, '');
  file.basename = file.basename.replace('.es6', '');
  file.extname = '.bundle.js';
  return file;
};

const transpileFiles = [
  config.js.src + '/**/src/*.es6.js',
  config.js.modules + '/**/src/*.es6.js',
  // Ignore already minified files.
  '!' + config.js.src + '/**/*.min.js',
  // Ignore bundle files, so we don’t transpile them twice (will make more sense later)
  '!' + config.js.src + '/**/*.bundle.js',
];

const transpile = () => (
  gulp.src(transpileFiles, { base: './' })
    .pipe(babel({
      presets: [['env', {
        modules: false,
        useBuiltIns: true,
        targets: { browsers: ['last 2 versions', '> 1%'] },
      }]],
    }))
    .pipe(rename(file => (bundleName(file))))
    .pipe(gulp.dest('./'))
);

transpile.description = 'Transpile javascript.';
gulp.task('scripts:transpile', transpile);

Which uses:

$ yarn add path gulp gulp-babel gulp-rename babel-preset-env --dev

On a side note, we’ll be open-sourcing our entire Gulp workflow real soon. We’re just working through a few extra use cases for it, so keep an eye out!

Learning ES6

Reading about ES6 is one thing, but I find getting into the code to be the best way for me to learn things. We like to follow Drupal coding standards, so we point our eslint config to extend what’s in Drupal core. Upgrading to 8.4.x obviously threw a LOT of new lint errors, which we usually disabled until time permitted their correction. But you can use these errors as a tailored ES6 guide. Tailored because it’s directly applicable to how you usually write JS (assuming you wrote the original code).

Working through each error, looking up the description, correcting it manually (as opposed to using the --fix flag) was a great way to learn it. It took some time, but once you understand a rule you can start skipping it, then use the --fix flag at the end for a bulk correction.

Of course you're also a Google away from a tonne of online resources and videos to help you learn if you prefer that approach!

ES6 with jQuery

Our original code is usually in jQuery, and I didn’t want to add removing jQuery into the refactor work, so currently we’re using both which works fine. Removing it from the mix entirely will be a future task.

The biggest gotcha was probably our use of this, which needed to be reviewed once we converted to arrow functions. Taking our header example from above:

return this.each(function () { var $header = $(this); });

Once converted into an arrow function, this inside the loop is no longer scoped to the callback function. It doesn’t change at all - it’s no longer an individual element of the loop, it’s still the same object we’re looping through. So clearly stating the obj as an argument of the .each() function lets us access the individual element again.

return this.each((i, obj) => { const $header = $(obj); });
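A minimal plain-JavaScript sketch (no jQuery) of the behaviour described above; obj and its properties are made-up names for illustration:

```javascript
// An arrow function has no `this` of its own — it uses the enclosing scope's.
const obj = {
  value: 42,
  regular: function () { return this.value; },
  // `this` here refers to the surrounding (module) scope, not `obj`.
  arrow: () => this && this.value,
};

console.log(obj.regular()); // 42
console.log(obj.arrow());   // undefined
```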

Converting the jQuery plugins (or jQuery UI widgets) to ES6 modules was a relatively easy task as well… instead of:

(function ($) {

  $.fn.header = function (options) {
    var opts = $.extend({}, $.fn.header.defaults, options);
    return this.each(function () {
      var $header = $(this);
      // do stuff with $header
    });
  };

  // Overridable defaults
  $.fn.header.defaults = {
    breakpoint: 700,
    toggleClass: 'header__toggle',
    toggleClassActive: 'is-active'
  };

})(jQuery);

We just make it a normal-ish function:

const headerDefaults = {
  breakpoint: 700,
  toggleClass: 'header__toggle',
  toggleClassActive: 'is-active'
};

function header(options) {
  // Arrow functions can't take `this` as a parameter name, so it's
  // passed through as `self`.
  return (($, self) => {
    const opts = $.extend({}, headerDefaults, options);
    return $(self).each((i, obj) => {
      const $header = $(obj);
      // do stuff with $header
    });
  })(jQuery, this);
}

export { header as myHeader }

Since the exported ES6 module has to be a top level function, the jQuery wrapper was moved inside it, along with passing through the this object. There might be a nicer way to do this but I haven't worked it out yet! Everything inside the module is the same as I had in the jQuery plugin, just updated to the new syntax.

I also like to rename my modules when I export them so they’re name-spaced based on the project, which helps when using a mix of custom and vendor scripts. But that’s entirely optional.

Now that we have our generic JS using ES6 modules it’s even easier to share and reuse them. Remember our Drupal JS separation? We no longer need to load both files into our theme. We can import our ES6 module into our .drupal.js file then attach it as a Drupal behaviour. 

@file header.drupal.js

import { myHeader } from './header';

(($, { behaviors }, { my_theme }) => {
  behaviors.header = {
    attach(context) {
      myHeader.call($('.header', context), {
        breakpoint: my_theme.header.breakpoint
      });
    }
  };
})(jQuery, Drupal, drupalSettings);

So a few differences here: we're importing the myHeader function from our other file, destructuring our Drupal and drupalSettings arguments to simplify them, and using .call() on the function to pass in the object before setting its arguments. Now the header.drupal.js file is the only file we need to tell Drupal about.

Some other nice additions in ES6 that have less to do with jQuery are template literals (being able to say $(`.${opts.toggleClass}`) instead of $('.' + opts.toggleClass)) and the more obvious use of const and let instead of var, which are block-scoped.
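Both features in a small runnable sketch (the values are made up):

```javascript
const opts = { toggleClass: 'header__toggle' };

// Template literal vs. concatenation — both build '.header__toggle':
const selector = `.${opts.toggleClass}`;
console.log(selector); // .header__toggle

// let/const are block-scoped, so `scoped` doesn't leak out of its block:
{
  let scoped = true;
  console.log(typeof scoped); // boolean
}
console.log(typeof scoped); // undefined
```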

Importing modules into different files requires an extra step in our build tools, though. Because browser support for ES6 modules is also a bit too low, we need to “bundle” the modules together into one file. The most popular bundler available is Webpack, so let’s look at that first.

Bundling with Webpack

Webpack is super powerful and was my first choice when I reached this step. But it’s not really designed for this component-based approach. Few bundlers truly are... Bundlers are great for taking one entry JS file which has multiple ES6 modules imported into it. Those modules might be broken down into smaller ES6 modules, and at some level are components much like ours, but ultimately they end up being bundled into ONE file.

But that’s not what I wanted! What I wanted, as it turned out, wasn’t very common. I wanted to add Webpack into my Gulp tasks much like our Sass compilation is, taking a “glob” of JS files from various folders (which I don’t really want to have to list), then to create a .bundle.js file for EACH component which included any ES6 modules I used in those components.

The each part was the real clincher. Getting multiple entry points into Webpack is one thing, but multiple destination points as well was certainly a challenge. The vinyl-named npm module was a lifesaver. This is what my Gulp task looked like:

import gulp from 'gulp';
import webpackStream from 'webpack-stream';
import webpack from 'webpack'; // Use newer webpack than webpack-stream
import named from 'vinyl-named';
import path from 'path';
import config from './config';

const bundleFiles = [
  config.js.src + '/**/src/*.js',
  config.js.modules + '/**/src/*.js',
];

const bundle = () => (
  gulp.src(bundleFiles, { base: './' })
    // Define [name] with the path, via vinyl-named.
    .pipe(named((file) => {
      const thisFile = bundleName(file); // Reuse our naming helper function
      // Set named value and queue.
      thisFile.named = thisFile.basename;
      return thisFile.named;
    }))
    // Run through webpack with the babel loader for transpiling to ES5.
    .pipe(webpackStream({
      output: {
        filename: '[name].bundle.js', // Filename includes path to keep directories
      },
      module: {
        loaders: [{
          test: /\.js$/,
          exclude: /node_modules/,
          loader: 'babel-loader',
          query: {
            presets: [['env', {
              modules: false,
              useBuiltIns: true,
              targets: { browsers: ['last 2 versions', '> 1%'] },
            }]],
          },
        }],
      },
    }, webpack))
    .pipe(gulp.dest('./')) // Output each [name].bundle.js file next to its source
);

bundle.description = 'Bundle ES6 modules.';
gulp.task('scripts:bundle', bundle);

Which required:

$ yarn add path webpack webpack-stream babel-loader babel-preset-env vinyl-named --dev

This worked. But Webpack has some boilerplate JS that it adds to its bundle output file, which it needs for module wrapping etc. This is totally fine when the output is a single file, but adding this (exact same) overhead to each of our component JS files starts to add up. Especially when we have multiple component JS files loading on the same page, duplicating that code.

It only made each component a couple of KB bigger (once minified, an unminified Webpack bundle is much bigger), but the site seemed so much slower. And it wasn’t just us, a whole bunch of our javascript tests started failing because the timeouts we’d set weren’t being met. Comparing the page speed to the non-webpack version showed a definite impact on performance.

So what are the alternatives? Browserify is probably the second most popular but didn’t have the same ES6 module import support. Rollup.js is kind of the new bundler on the block and was recommended to me as a possible solution. Looking into it, it did indeed sound like the lean bundler I needed. So I jumped ship!

Bundling with Rollup.js

The setup was very similar so it wasn’t hard to switch over. It had a similar problem with single entry/destination points, but it was much easier to resolve with the gulp-rollup-each npm module. My Gulp task now looks like:

import gulp from 'gulp';
import rollup from 'gulp-rollup-each';
import babel from 'rollup-plugin-babel';
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import path from 'path';
import config from './config';

const bundleFiles = [
  config.js.src + '/**/src/*.js',
  config.js.modules + '/**/src/*.js',
];

const bundle = () => {
  return gulp.src(bundleFiles, { base: './' })
    .pipe(rollup({
      plugins: [
        resolve(),
        commonjs(),
        babel({
          presets: [['env', {
            modules: false,
            useBuiltIns: true,
            targets: { browsers: ['last 2 versions', '> 1%'] },
          }]],
          babelrc: false,
          plugins: ['external-helpers'],
        }),
      ],
    }, (file) => {
      const thisFile = bundleName(file); // Reuse our naming helper function
      return {
        format: 'umd',
        name: path.basename(thisFile.path),
      };
    }))
    .pipe(gulp.dest('./')); // Output each [name].bundle.js file next to its source
};

bundle.description = 'Bundle ES6 modules.';
gulp.task('scripts:bundle', bundle);

We don’t need vinyl-named to rename the file anymore; we can do that as a callback of gulp-rollup-each. But we need a couple of extra plugins to correctly resolve npm module paths.

So for this we needed:

$ yarn add path gulp-rollup-each rollup-plugin-babel babel-preset-env rollup-plugin-node-resolve rollup-plugin-commonjs --dev

Rollup.js does still add a little bit of boilerplate JS but it’s a much more acceptable amount. Our JS tests all passed so that was a great sign. Page speed tests showed the slight improvement I was expecting, having bundled a few files together. We're still keeping the original transpile Gulp task too for ES6 files that don't include any imports, since they don't need to go through Rollup.js at all.

Webpack might still be the better option for more advanced things that a decoupled frontend might need, like Hot Module Replacement. But for simple or only slightly decoupled components Rollup.js is my pick.

Next steps

Some modern browsers can already support ES6 module imports, so this whole bundle step is becoming somewhat redundant. Ideally the bundled file, with its overhead and old-fashioned code, is only used on those older browsers that can’t handle the new and improved syntax, and the modern browsers use straight ES6...

Luckily this is possible with a couple of script attributes. Our .bundle.js file can be included with the nomodule attribute, alongside the source ES6 file with a type="module" attribute. Older browsers ignore the type="module" file entirely because modules aren’t supported, and browsers that can support modules ignore the nomodule file because it told them to. This article explains it more.
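In markup, that pairing looks like this (the file names are hypothetical):

```html
<!-- Modern browsers load the ES6 source; older browsers skip it. -->
<script type="module" src="header.es6.js"></script>
<!-- Module-capable browsers ignore this; older browsers load the bundle. -->
<script nomodule src="header.bundle.js"></script>
```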

Then we'll start replacing the jQuery entirely, even look at introducing a Javascript framework like React or Glimmer.js to the more interactive components to progressively decouple our front-ends!


Posted by Rikki Bochow
Front end Developer

Dated 5 December 2017



Dec 04 2017
Dec 04

NOTE: This blog post is about a future release of Lightning. Lightning 2.2.4, with the migration path to Content Moderation, will be released Wednesday, December 6th.

The second of two major migrations this quarter is complete! Lightning 2.2.4 will migrate you off of Workbench Moderation and onto Core Workflows and Content Moderation. (See our blog post about Core Media, our first major migration.)

The migration was a three-headed beast:

  1. The actual migration, which included migrating states and transitions into Workflows and migrating the states of individual entities into Content Moderation.
  2. Making sure other Lightning Workflow features continued to work with Content Moderation, including the ability to schedule state transitions for content.
  3. Feature parity between Workbench Moderation and Content Moderation.
Tryclyde - the three-headed CM migration beast

The actual migration

Content Moderation was not a direct port of Workbench Moderation. It introduced the concept of Workflows which abstracts states and transitions from Content Moderation. As a result, the states and transitions that users had defined in WBM might not easily map to Workflows - especially if different content types have different states available.

To work around this, the migrator generates a hash of all available states per content type; then groups content types with identical hashes into Workflows. As an example, a site with the following content types and states would result in three Workflows as indicated by color:

WBM states/transitions mapping to Workflows
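The grouping logic can be sketched roughly like this (a simplified illustration, not the actual WBM2CM code; the bundle names and states are made up):

```php
<?php
// Group content types into workflows by hashing their available states.
$bundles = [
  'article' => ['draft', 'needs_review', 'published'],
  'page'    => ['draft', 'published'],
  'news'    => ['draft', 'needs_review', 'published'],
];

$workflows = [];
foreach ($bundles as $bundle => $states) {
  sort($states);
  $hash = hash('sha256', implode('.', $states));
  // Content types with identical state sets share a hash,
  // so they end up in the same workflow.
  $workflows[$hash][] = $bundle;
}
// Result: article and news share one workflow; page gets its own.
```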

The second half of the migration was making sure all existing content retained the correct state. Early prototypes used the batch API to process states, but this quickly became unscalable. In the end, we used the Migrate module to:

  1. Store the states of all entities and then remove them from the entities themselves.
  2. Uninstall Workbench Moderation and install Workflows + Content Moderation.
  3. Map the stored states back to their original entities as Content Moderation fields.

Note: This section of Lightning migration was made available as the contrib module WBM2CM. The rest of the migration is Lightning-specific.

Other Lightning Workflow features

Lightning Workflow does more than just provide states. Among other things, it also allows users to schedule state transitions. We have used the Scheduled Updates module for this since its introduction. Unfortunately, Scheduled Updates won't work with the computed field that is provided by Content Moderation. As a result, we ended up building a scheduler into Lightning Workflow itself.

Scheduled Updates is still appropriate and recommended for more complex scheduling - like for body fields or taxonomy term names. But for the basic content state transitions (i.e., publish this on datetime) you can use native Lightning Workflow.

As an added bonus, we sidestep a nasty translation bug (feature request?) that has been giving us problems with Scheduled Updates.

Feature parity

While Workflows is marked as stable in Core, Content Moderation is still in beta. This is partially because it's still missing some key features and integrations that Lightning uses. Specifically, Lightning has brought in patches and additional code so that we can have basic integration between Content Moderation ↔ Views and Content Moderation ↔ Quick Edit.

Want to try it out?

Assuming a standard Composer setup, you can update to the latest Lightning with the following. The migration is included in Lightning 2.2.4 and above:

$ composer update acquia/lightning --with-dependencies

Once you have updated your code, you can have Lightning automatically apply all pending updates, including the Content Moderation migration with the following (recommended):

$ /path/to/console/drupal update:lightning --no-interaction

Or you can just enable the WBM2CM module manually and trigger the migration with:

$ drush wbm2cm-migrate
Dec 04 2017
Dec 04

Someone once said, “if you have to explain the joke, it takes the fun out of it.” Well, the same can be said for designing a website. Explaining the important and sometimes technical details can be a tedious process many designers would avoid if possible. But when it comes to communicating the form and function of the user experience through wireframes, explaining each element can make or break the project. It’s always a good idea to include annotations.

Types of Wireframes

First, let’s start at the beginning to avoid vague deliverables. Does every UX designer you’ve met create similar looking wireframes? There are about as many different styles of wireframes as there are websites on the world wide web, so chances are the term ‘wireframe’ by itself really isn’t saying much to your client.

Most folks understand a wireframe to be a basic layout for a web page, in shades of grey. But other than that, what else should they expect? Will it be a basic paper drawing or a functional computer drawn detailed webpage? Usually the type of wireframe provided is based on the budget and pace of the project, along with considerations for the type of site to be built, the scale of the overall project, and the depth of the sitemap. Sometimes more than one type of wireframe is used when user testing is needed. Once the most appropriate wireframe approach has been determined for the project, it’s a good idea to convey the definition in your deliverable documentation. Here is an example:

Low Fidelity Wireframe

  • Can be paper sketches or computer drawn.
  • Abstract lines and shapes that represent spatial elements on the page.
  • No actual copy is used, just lines to represent text.
  • Descriptions for content and functionality (as needed) are included directly on (or near) the elements.

Medium Fidelity Wireframe

  • Shows the content framework, layout structure, and hierarchy of key elements using basic lines, shapes, and boxes to represent images, icons, etc.
  • Copy inserted where available, otherwise lorem ipsum is used as a placeholder.
  • Presented in shades of grey with no styles or graphics applied.
  • Functions have labels.
  • Descriptions for content and functionality provided in annotations.

High Fidelity Wireframe

  • More details are included for each component along with additional views for features like modal windows or dropdown filters.
  • May include specific typeface choices to begin to show hierarchy.
  • Spacing between elements is further refined.
  • Determinations between images vs. illustrations may be alluded to.
  • Remains without color.

Why Annotations are Important

So you’ve just created a beautiful set of wireframes, had a great client presentation, and they’ve approved them. Great job! But that’s only part of it. Just because you’re ready to share the wireframes with the developers so they can begin their technical discovery, it doesn’t mean they have the same shared understanding about them as you and your client. Oftentimes in an Agile development process, not everyone involved is present for every meeting during discovery due to the pace of the project and/or budget. Even when everyone is present, we don’t want to make them rely on sheer memory alone for how each component is going to be built or how it will function. That’s where the ever-important annotations come in. Providing fully annotated wireframes to both developers and your client will help keep everyone on the same page about what to expect at launch.

The Art of Annotating

Once those medium or high fidelity wireframes have been created, it’s time to add the important information so we can ensure everyone has a clear understanding of the functionality that is expected for each component. The most important thing to remember when annotating your work is that it should speak for you when you’re not there.

Where Should The Annotations Go?

Typically, annotations are placed on the side or the bottom of a wireframe in a numbered list of descriptions with corresponding numbers next to the actual element on the wireframe. The language is kept brief in order to include as much information as possible within a limited physical space. It’s helpful to use color combinations that stand out from the wireframe to avoid confusion between elements. And when you have multiple audiences you need to address, it’s helpful to tailor your annotations by creating different sets for each.

Who Are The Annotations For?

Not that you need to include them all, but there are typically up to 5 different audiences who have different needs but may benefit from wireframe annotations:

The Client wants to understand how each element incorporates their business goals. When designing for Drupal, annotations are also the perfect place to highlight when certain components will be manually curated or fed automatically to ensure the expected maintenance after launch is clear.

The Front-End Developer wants to see where the images and icons are placed, what size and aspect ratio they are, defined page margins, amount of padding between elements, and the functionality for all interactive components.

The Back-End Developer can benefit from notes on image sizes, functionality for interactive components, manually curated elements vs. automatic feeds. They can also incorporate helpful tips for the content author with specific direction.

The Copywriter can write more freely if you’re able to provide recommended character counts for each section within the component: Title, subtitle, descriptive text areas, etc. Notes for optional elements will also improve the quality of the final copy. And when content will be auto-fed, note it so they can move on!

The Designer, for those instances where wireframes and designs are handled by different people, benefits when you include design direction or live links to reference examples. You’ve been involved in conversations with the client during wireframe development and understand their vision. This is the opportunity to communicate that information.

What Information Should The Annotations Include?

Don’t think you’re being helpful if you annotate everything indiscriminately. The last thing you want as an end result is a wireframe so overloaded with annotations that it looks like the plans for the next spaceship launch. Too many notes can actually be more confusing. Smart wireframe design can alleviate the need for certain descriptive annotations when you include the descriptions within the titles of the elements themselves:

  • The title ‘Logo’ or ‘Icon’ sits inside of the circle.
  • The Button title reads ‘Optional, Editable Button’.
  • Required form fields have a red asterisk.
  • The opening ‘lorem ipsum’ descriptive copy instead starts with “This is an introductory paragraph that talks about scientific discoveries”.

What should you annotate?

  • Anything that is not obvious just by looking at the wireframe.
  • Conditional items: when and how to display them.
  • Cross-device differences. An example would be when an image may display on desktop but not on mobile. Or when an image may be replaced with one of a different aspect ratio on mobile, etc.
  • Logical or technical constraints: maximum image size, longest possible password field length, etc.
  • Any hidden details: content behind additional tabs, items in a drop-down menu, etc.
  • Functional items (links, buttons): expected behavior when the user clicks. This also includes how filters and form fields should respond.

Here are a few common areas of interaction states, feedback, and gestures that benefit from detailed annotations:

Dropdown Lists

Include the expanded states of the drop down menus on a separate page with annotations.

Dynamic Content

Explanations for things like search results listings, news content that will auto feed, image or video galleries, pagination.

File Uploads and Downloads

Provide interactions for the file upload. This may include things like file upload dialog, file upload progress bar, file upload completion notice.

Messages and Modals

Interactive feedback for form validation, confirmation, warnings, alerts, failed actions, errors, etc.


Numbers

Account for the longest number of digits that will be needed for any confined spaces. Example: should it display as 100,000,000 or 100M?

Titles and Labels

Account for the longest names potentially possible. Provide solutions for cases when the lines should not break: ie, within tables, accordions, filters, etc.


Tooltips

Provide visual reference and implied behavior for tooltips, and try to account for the maximum characters that will be needed.


Gestures

Especially when annotating mobile wireframes, your intentions will not be lost if you add directional notes for specific gestures, such as:

  • Single-click (or tap)
  • Double-click (or tap)
  • Right-click (or tap)
  • Hover (provide mobile alternative)
  • Swipe
  • Pinch & spread
  • Drag & Drop
  • Default link for phone, email, map
  • External web links

Wrap it Up

Fully defining the wireframe process in the beginning of the project will clarify expectations for the client. Wrapping those completed wireframes in an annotated package will help to keep everyone involved working towards the same goals, and avoid disputes, disappointments, and costly reworks down the line.

Additional Resources
Designer to Developer: How to Go From Paper to Style Guide | Blog
Why Your Website Project Needs a UX Designer | Blog
For the LOVE of User Experience Design | Blog

Dec 04 2017
Dec 04

We are so glad to announce that Matthias Walti has decided to join LakeDrops. He brings skills and experience in building e-commerce solutions, is a user experience expert, and is well known for writing great content driven by his marketing background. The LakeDrops team is excited to have Matthias on board, which will help the focused network address sophisticated digital challenges in even more business areas. Matthias, who is based in Switzerland, joins Richard (Spain) and Jürgen (Germany), who started LakeDrops a couple of years ago to collaboratively define, develop and publish best practices and products based on Drupal.

Dec 04 2017
Dec 04

Once a text field has data stored, it is not very easy or obvious how to change its maximum length. The UI shows a message warning that the field cannot be changed because there is existing data. Sometimes it is necessary to change these values anyway. There are a few ways and some resources for doing this in Drupal 7, but I could not find a way to do it in Drupal 8, so I decided to create a small function to do it:

Caution: Any change to the database needs to be done carefully. Before you continue, please create a backup of your database.

/**
 * Update the length of a text field which already contains data.
 *
 * @param string $entity_type_id
 * @param string $field_name
 * @param int $new_length
 */
function _module_change_text_field_max_length($entity_type_id, $field_name, $new_length) {
  $name = 'field.storage.' . $entity_type_id . '.' . $field_name;

  // Get the current settings.
  $result = \Drupal::database()->query(
    'SELECT data FROM {config} WHERE name = :name',
    [':name' => $name]
  )->fetchField();
  $data = unserialize($result);
  $data['settings']['max_length'] = $new_length;

  // Write settings back to the database.
  \Drupal::database()->update('config')
    ->fields(['data' => serialize($data)])
    ->condition('name', $name)
    ->execute();

  // Update the value column in both the _data and _revision tables for the field.
  $table = $entity_type_id . '__' . $field_name;
  $table_revision = $entity_type_id . '_revision__' . $field_name;
  $new_field = ['type' => 'varchar', 'length' => $new_length];
  $col_name = $field_name . '_value';
  \Drupal::database()->schema()->changeField($table, $col_name, $col_name, $new_field);
  \Drupal::database()->schema()->changeField($table_revision, $col_name, $col_name, $new_field);

  // Flush the caches.
  drupal_flush_all_caches();
}
This function needs the entity type ID, the field name, and the new maximum length.

And we can use it like this:

   _module_change_text_field_max_length('node', 'field_text', 280);

Usually, this code should be placed in (or called from) a hook_update_N() implementation so it is executed automatically during database updates.
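For example, a hook_update_N() implementation calling the function above could look like this (the module name mymodule and the update number are placeholders):

```php
/**
 * Increase the maximum length of field_text on nodes to 280 characters.
 */
function mymodule_update_8001() {
  _module_change_text_field_max_length('node', 'field_text', 280);
}
```

Running `drush updatedb` (or visiting update.php) would then apply the change on every environment.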

And if the new length is too long to be placed in a regular input area, you can use the Textarea widget for text fields which will allow you to use the larger text area form element for text fields.

Dec 04 2017
Dec 04

GraphQL is becoming more and more popular every day. Now that we have a beta release of the GraphQL module (mainly sponsored and developed by Amazee Labs) it's easy to turn Drupal into a first-class GraphQL server. In this series, we'll try to provide an overview of its features and see how they translate to Drupal.

In the last post we covered the basic building blocks of GraphQL queries. We started with the naming conventions, then we took a look at how and when to use fragments. Finally, we moved on to aliases, which can be used to change names of the fields as well as to use the same field more than once in the same block. This week we'll delve into the ambiguous concept of Fields.

What exactly are GraphQL fields?

Fields are the most important part of any GraphQL query. In the following query, nodeById, title, entityOwner, and name are all fields.
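The example query appeared as an image in the original post; reconstructed from the field names mentioned above, it would look roughly like this:

```graphql
query {
  nodeById(id: "1") {
    title
    entityOwner {
      name
    }
  }
}
```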


Each GraphQL field needs to have a type that is stored in the schema. This means that it has to be known up front and cannot be dynamic. At the highest level, there are two types of values a field can return: a scalar and an object.


Scalar fields are the leaves of any GraphQL query. They have no subfields and they return a concrete value; title and name in the query above are scalars. There are a few core scalar types in GraphQL, e.g.:

  • String: A UTF‐8 character sequence.
  • Int: A signed 32‐bit integer.
  • Float: A signed double-precision floating-point value.
  • Boolean: true or false.

If you're interested in how Drupal typed data is mapped to GraphQL scalars, check out the graphql_core.type_map service parameter and the graphql_core.type_mapper service.

Complex types

Objects, like nodeById and entityOwner in the query above, are collections of fields. Each field that is not a scalar has to have at least one sub-field specified. The list of available sub-fields is defined by the object's type. If we paste the query above into GraphiQL (/graphql/explorer), we'll see that the entityOwner field is of type User and name is one of User's subfields (of type String).


Fields can also have arguments. Each argument has a name and a type. In the example above, the nodeById field takes two arguments: id (String) and langcode. The same field can be requested more than once with a different set of arguments by using aliases, as we saw in the last post.

How do Drupal fields become GraphQL fields?

One of the great new traits of Drupal 8 core is the typed data system. In fact, this is the feature that makes it possible for GraphQL to expose Drupal structures in a generic way. For the sake of improving the developer experience, especially the experience of the developers of decoupled frontends, Drupal fields have been divided into two groups.

Multi-property fields

The first group comprises all the field types that have more than one property. These fields are objects with all their properties as sub-fields.

This is how we'd retrieve values of a formatted text field (body) and a link field. Date fields and entity references also fall into this category. The latter have some unique features so let's check them out.
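The body/link retrieval example mentioned above was embedded as an image in the original post; a sketch of what it might look like (fieldLink is an assumed field name, and the sub-fields shown are the usual properties of formatted text and link fields):

```graphql
query {
  nodeById(id: "1") {
    body {
      value
      format
      processed
    }
    fieldLink {
      uri
      title
    }
  }
}
```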

Entity reference fields

This type of field has two properties: the scalar target_id and the computed entity. This special property inherits its type from the entity that the field points to. Actually, we've already seen that in the named fragments example in the last post, where fieldTags and fieldCategory were both term reference fields. Let's look at a simplified example.

Since fieldCategory links to terms, its entity property is of type TaxonomyTerm. We can go further.

The entityOwner property is of type User, so we can get their email. Apparently, we can go as deep as the entity graph allows. The following query is perfectly valid too.
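The two queries described here were also images in the original post; a combined sketch (querying entityOwner on the node, and assuming mail is the email property exposed on User) might look like:

```graphql
query {
  nodeById(id: "1") {
    entityOwner {
      mail
    }
    fieldCategory {
      entity {
        name
      }
    }
  }
}
```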

It retrieves the title of an article that is related to the article that is related to the article with node id one and this is where GraphQL really shines. The query is relatively simple to read, it returns a simple-to-parse response and does it all in one request. Isn't it just beautiful? :)
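The triple-nested query described above was likewise an image; assuming a hypothetical fieldRelatedArticle entity reference field, it could look like:

```graphql
query {
  nodeById(id: "1") {
    fieldRelatedArticle {
      entity {
        fieldRelatedArticle {
          entity {
            title
          }
        }
      }
    }
  }
}
```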

Single-property fields

The second group comprises all the field types that have exactly one property (usually called value), like core plain text fields, email, phone, booleans and numbers. There's been a discussion about whether such fields should be rolled up to scalars or remain single-property objects. The former option won and in 3.x members of this group have no sub-fields.

That's it for the fields. Next week we're going to talk about... fields again :) but this time we'll see how to create one.

Dec 04 2017
Dec 04

This month we did something a little bit different with the meet-up format. Instead of one person presenting a slide deck, we had a panel discussion on all things accessibility with four accessibility experts - Eric Bailey, Helena McCabe, Scott O'Hara, and Scott Vinkle!

Eric Bailey is a UX Designer at Cantina in the greater Boston area. He prides himself on creating straightforward solutions that address a person’s practical, physical, cognitive, and emotional needs using accessible, performant, device-agnostic technology. Which roughly translates into: he makes accessible websites.

Helena McCabe is a senior front-end developer at Lullabot and lives in Orlando, Florida. She is passionate about creating online experiences that are accessible to everyone. She is also one-half of the Accessibots team on Twitter that promotes website accessibility.

Scott O’Hara is an accessibility engineer at The Paciello Group and is also from the Boston area. When not at work, you can find him adding code to various accessibility-focused GitHub repos including many of his own.

Scott Vinkle is a front-end developer and website accessibility advocate at Shopify. He’s from the great city of Kingston, Ontario and a big part of the a11yproject as well as many other accessibility organizations.

There were some questions lined up to keep the conversation going, but we ended up having some amazing on-the-fly questions from the audience, so it was a bit more spontaneous and a lot of fun!


1. When did you first start working in website accessibility and why?

Family reasons, knew someone with a disability, wanting to help, and wanting to give back to the community were all reasons brought up during the discussion.

2. What do you find to be the most common accessibility mistakes?

  • Lack of education/know-how
  • Lack of planning
  • Lack of resources put towards accessibility
  • Not designing with accessibility in mind
  • Adding ARIA and other “accessible” markup that actually makes the accessibility worse
  • Never going to have 100% accessibility, so try things out, but test and retest
  • Lack of testing
  • Not understanding there are all types of disabilities and assistive technologies to build for
  • Missing contact form/phone number/email or another way for people to be able to tell the website maintainers if they find an issue

3. What are good examples of types of disabilities that have unique accessibility problems and how are they best addressed? Are any of these just not really covered by WCAG 2.0?

  • By addressing WCAG 2.0 AA that is a really good start, but does not cover all the recent tech changes (example: mobile). WCAG 2.1 does do a better job at that, but those rules are still being worked on.
  • But by just going down the WCAG 2.0 checklist (or even the 2.1 version), does not mean that you have achieved accessibility.
  • Some people do not think about “disability” in broader terms, they might just think of more major disabilities such as deafness or blindness. People are extremely variable and there are a lot of things to consider. Examples include:
    • Vestibular issues where parallax can make them seasick
    • Learning disability and you need more time to read a scrolling carousel
    • Temporary/situational disabilities
    • Cognitive impairments (biological or situational)
    • Additionally, some people may have limitations, but do not see themselves as “disabled”. But these people could also benefit from accessible websites (example: color blindness, hard of hearing, wearing glasses).

4. How accessible does hamburger navigation tend to be? Have any libraries or examples you like?

5. What are your top three accessibility tools? What automatic tools do you use in accessibility testing?

6. Have you started to incorporate WCAG 2.1 guidelines/techniques? In your review of it, is there anything you’re already doing, and if so, can you highlight one?

7. What do you think accessibility will look like in 5 years?

  • Siri and Alexa and other machine vision integrations
  • 2D image recognition and what that morphs into might be interesting
  • “Internet of things” (example: house being wired to be voice or touch controlled)
  • Virtual reality integration with accessibility in mind (example: blind gamer who was able to “see” motion when using VR)
  • The area of augmented reality (beyond VR) will also be interesting in the next five years
  • Explosion of accessibility tools lately, thinking this will only expand

8. What is the one thing you would say to new people starting off in accessibility?

Scott O: Just care and do your best.

Scott V: Talk to people, go to local meet-ups/conferences, read things, but also do things, such as contributing to

Helena: Web accessibility is not an obligation or a thing to check off a list. Think of website accessibility as the exciting challenge that it is that will positively impact your users and have a positive attitude about it.

Eric: Don’t be afraid to get your feet wet. There is always more work to be done, but you need to start somewhere.


YouTube Video


Drupal Accessibility Group

Join the Accessibility group on Drupal.org for hints, tips, discussions, and patch proposals to help make Drupal more inclusive.

Next Month

A11Y Talks  - December 20th, 2017 (12pm ET)

Topic: Accessibility Priorities (plus some demos of cool new accessibility things)

Speaker: To be announced soon!

Dec 04 2017
Dec 04

Since Dries' keynote at DrupalCon Vienna about how Drupal is for ambitious digital experiences, it has become somewhat more obvious how its founder sees the future, and what agencies should focus on in a content management system market more competitive than ever. Although we have to admit that some agencies have already embraced the idea of building their businesses on delivering such digital experiences. 

So what differentiates ambitious digital experiences from, let's say, websites? Or "just plain" digital experiences? And what qualities does an ambitious digital experience have? It is by no means a simple endeavor to cover all the aspects of such an experience; this is why I will divide it into several parts and try to cover as much as possible. But first I will start by clarifying the terms ambitious, digital, and experience one by one. I believe there is no need to spend too much space on Drupal, as we all know very well what it is capable of delivering. 


From ambition to ambitious

I don't want to dwell too long trying to figure out how Dries came to this idea, but there had to be an underlying ambition: an ambition to steer Drupal (and agencies) in a direction where we would be able to differentiate ourselves, not just from direct competitors but also from simpler SaaS solutions. So, there is a clearly defined and stated ambition by Dries about what he thinks Drupal should aim for, and he is ambitious about achieving it. But the more important issue is that this ambition is also nurtured in digital agencies, so they embrace the idea and become (more) ambitious. Some of them have already been doing this, focusing their business on ambitious digital experiences; some have embraced the idea but not fully incorporated it into their daily dealings with clients; and there are probably also some agencies who still struggle to accept it.

Back to ambitious. What do dictionaries say about this word? It is an adjective, and it can mean having a strong desire and determination to succeed. Some synonyms are motivated, eager, energetic, committed, driven, enterprising... The other meaning is the one connected to digital experience: a piece of work intended to satisfy high aspirations and also difficult to achieve. Bullseye. An ambitious digital experience for whatever client does have high aspirations, it does have a set of goals it wants to achieve and, of course, there is a lot at stake. But is it also difficult to achieve? Probably the harder part of this journey is getting clients who appreciate and utilize the value of such a digital experience; the technical part of delivering it should be the easier one. 


Digital, experience, digital experience

Let's start with digital. The most obvious definition would be the opposite of analog. Digital involves or somehow relates to the use of information technology. Computers distinguish just two values, 0 and 1, so all the data a computer processes must be encoded digitally, as a series of ones and zeros. But again, let's move away from the technology debate and focus instead on what digital should represent and what it can deliver. Somehow that doesn't feel like enough. We have to move away from simple definitions and focus not on what digital is but understand it as a way of doing things. To make it more concrete, McKinsey's Dörner and Edelman broke this definition down into three attributes:
1. creating value at the new frontiers of the business world, 
2. creating value in the processes that execute a vision of customer experiences, and 
3. building foundational capabilities that support the entire structure.

We will return to those three attributes at a later time. But with defining digital as a way of doing things we are coming closer to creating experiences. Any digital device is just a tool until used in such a manner it creates an experience. 

Let's look at what dictionaries say about experience. It can be a noun or a verb. As a noun it can mean two things: first, contact with and/or observation of facts or events; second, an event which leaves an impression on someone. Let's stick with the simple for now and build up to a digital experience. 

If we just merge the meanings of two words and try to come up with the definition of our own, we could say that digital experience is an interaction that takes place between an organization and a user by the use of digital technologies. Gartner does not have a definition of digital experience (they do have DEM - a Digital experience monitoring which is defined as an availability and performance monitoring discipline), but according to Mick MacComascaigh digital experience can be regarded as: "a composite set of experiential elements — including content, design and functional elements — delivered over a digital channel that has the highest probability of prompting any of the desired set of predefined behaviors."



Drupal and ambitious digital experiences

Content management systems play an important role in delivering digital experiences, but just managing content does not deliver the ambitious experiences we want. So we could move the debate away from which technology to use as a CMS toward a digital experience debate. For instance, when talking about a CMS, a question about costs will be raised sooner rather than later; if we talk about digital experiences, we would probably not be talking about cost but about ROI. Away from publishing content to communicating. From counting clicks and how easy-to-use it is, to analyzing behavior, usability and learnability. From how efficient a CMS is to the effects of the digital experience. 

Because in the end an experience is much more than content. And the best digital experiences should be natural and intuitive. And focused on the customers.

I will focus on aspects and elements of ambitious digital experiences in the following blog posts. If you want to stay on top of it and deliver digital experiences focused on your clients' customers, get in contact.

Dec 02 2017
Dec 02

I recently hit a curveball with a custom migration to import location data into Drupal. I am helping a family friend merge their bespoke application into Drupal. Part of the application involves managing Locations. Locations have a parent location, so I used taxonomy terms to harness their hierarchy capabilities. Importing levels one, two, three, and four worked great! Then, all of a sudden, when I began to import levels five and six, I ran into an error: Specified key was too long; max key length is 3072 bytes

Migration error output

False positives

My first reaction was a subtle table flip and bang on the table. I had just spent the past thirty minutes writing a Kernel test which executed my migrations and verified the imported data. Everything was green. Term hierarchy was preserved. Run it live: table flip breakage. But, why?

When I run tests I do so with PHPUnit directly, to execute just my one test class or a single module. Here is the snippet from my project's phpunit.xml:

    <!-- Example SIMPLEST_DB value: mysql://username:password@localhost/databasename#table_prefix -->
    <env name="SIMPLETEST_DB" value="sqlite://localhost/sites/default/files/.ht.sqlite"/>

Turns out that when running SQLite there are no constraints on index key length (or there's some deeper explanation I didn't research). The actual site is running on MySQL/MariaDB. As soon as I ran my Console command to execute the migrations, the environment change revealed the failure.
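A cheap guard against this kind of environment drift is to run the same Kernel tests against the production database engine before going live. A sketch of the phpunit.xml change, assuming a hypothetical local MySQL database named drupal_test with placeholder credentials:

```xml
    <!-- Point SIMPLETEST_DB at MySQL so tests hit the same index limits as production. -->
    <env name="SIMPLETEST_DB" value="mysql://username:password@localhost/drupal_test"/>
```

Keeping both values in the file and commenting one out makes it easy to flip between the fast SQLite run and a production-parity MySQL run.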

Finding a possible fix

Following the wrong path to fixing a bug, I went on a frantic Google search trying to find an answer that asserted my expected fix: that there is some bogus limitation on the byte size that I can somehow bypass. I reviewed the fact that InnoDB has a limit of 3072 bytes, and that MyISAM has a different one (later, I found it is worse at 1000 bytes, which is one reason we're not all using MyISAM.)

I found an issue reporting the index key being too long. The bug occurs when too many keys are listed in the migration's source plugin. Keys identify unique rows in the migration source. They are also used as the index in the created migrate_map_MIGRATION_ID table. So if you require too many keys to identify unique rows, you will experience this error. In most cases, you can probably break up your CSV and normalize it to make it easier to parse.
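To see why a six-key source blows past the limit, consider the map table Migrate builds. The DDL below is a simplified sketch, not core's exact schema (column names follow core's sourceidN convention; the table name matches my migration ID): with a utf8mb4 charset each varchar character can take 4 bytes, so indexing six 255-character key columns together costs 6 × 255 × 4 = 6120 bytes, well over InnoDB's 3072-byte limit, while six 100-character columns cost 6 × 100 × 4 = 2400 bytes and fit.

```sql
-- Simplified sketch of the migrate map table for a six-key source.
-- Composite index at VARCHAR(255), utf8mb4: 6 * 255 * 4 = 6120 bytes > 3072 (fails).
-- Composite index at VARCHAR(100), utf8mb4: 6 * 100 * 4 = 2400 bytes < 3072 (fits).
CREATE TABLE migrate_map_company_location6 (
  source_ids_hash VARCHAR(64) NOT NULL,
  sourceid1 VARCHAR(255) NOT NULL,
  sourceid2 VARCHAR(255) NOT NULL,
  sourceid3 VARCHAR(255) NOT NULL,
  sourceid4 VARCHAR(255) NOT NULL,
  sourceid5 VARCHAR(255) NOT NULL,
  sourceid6 VARCHAR(255) NOT NULL,
  destid1 INT UNSIGNED NULL,
  PRIMARY KEY (source_ids_hash),
  INDEX source (sourceid1, sourceid2, sourceid3, sourceid4, sourceid5, sourceid6)
);
```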

Indexes were added in Drupal 8.3 to improve performance, so I had to find a fix. First I tried swapping back to MyISAM, which I soon realized was a fool's errand, but I was desperate.

I thought about trying to normalize the CSV data and make different, smaller, files. But there was a problem: some locations share the same name but have different parent hierarchies. A location in Illinois could share the same name as a location in a Canadian territory. I needed to preserve the Country -> Administrative Area -> Locality values in a single CSV.

A custom idMap plugin to the rescue

If you have worked with the Migrate module in Drupal you are familiar with process plugins, and possibly with source plugins. The former help you transform data and the latter bring data into the migration. Migrate also has an id_map plugin type. There is a single plugin provided by Drupal core: sql. I never knew or thought about this plugin because it is never explicitly defined in a migration. In fact, we never have to define it.

  /**
   * {@inheritdoc}
   */
  public function getIdMap() {
    if (!isset($this->idMapPlugin)) {
      $configuration = $this->idMap;
      $plugin = isset($configuration['plugin']) ? $configuration['plugin'] : 'sql';
      $this->idMapPlugin = $this->idMapPluginManager->createInstance($plugin, $configuration, $this);
    }
    return $this->idMapPlugin;
  }
If a migration does not provide an idMap definition, it falls back to core's sql mapping.

Hint: if you want to migrate into a non-SQL database you're going to need a custom id_map plugin!

Once I found this plugin I was able to find out where it created its table and the index information. Bingo! \Drupal\migrate\Plugin\migrate\id_map\Sql::ensureTables

  /**
   * Create the map and message tables if they don't already exist.
   */
  protected function ensureTables() {

In this method, it creates the schema that the table will use. There is some magic in here. By default, all keys are treated as varchar fields with a length of 64. But then it matches those keys up with known destination field definitions. So if you have a source value going to a plain text field, its key column changes to a length of 255.

      // Generate appropriate schema info for the map and message tables,
      // and map from the source field names to the map/msg field names.
      $count = 1;
      $source_id_schema = [];
      $indexes = [];
      foreach ($this->migration->getSourcePlugin()->getIds() as $id_definition) {
        $mapkey = 'sourceid' . $count++;
        $indexes['source'][] = $mapkey;
        $source_id_schema[$mapkey] = $this->getFieldSchema($id_definition);
        $source_id_schema[$mapkey]['not null'] = TRUE;
      }

      $source_ids_hash[static::SOURCE_IDS_HASH] = [
        'type' => 'varchar',
        'length' => '64',
        'not null' => TRUE,
        'description' => 'Hash of source ids. Used as primary key',
      ];
      $fields = $source_ids_hash + $source_id_schema;

      // Add destination identifiers to map table.
      // @todo How do we discover the destination schema?
      $count = 1;
      foreach ($this->migration->getDestinationPlugin()->getIds() as $id_definition) {
        // Allow dest identifier fields to be NULL (for IGNORED/FAILED cases).
        $mapkey = 'destid' . $count++;
        $fields[$mapkey] = $this->getFieldSchema($id_definition);
        $fields[$mapkey]['not null'] = FALSE;
      }

This was my issue. The destination field is the term name field, which has a length of 255. Great. But there is no way to interject here and change that value: all of the field schemas come from field and typed data information.

The solution? Make my own plugin. The following is my sql_large_key ID mapping class.


<?php

namespace Drupal\mahmodule\Plugin\migrate\id_map;

use Drupal\migrate\Plugin\migrate\id_map\Sql;

/**
 * Defines the sql based ID map implementation.
 *
 * It creates one map and one message table per migration entity to store the
 * relevant information.
 *
 * @PluginID("sql_large_key")
 */
class LargeKeySql extends Sql {

  /**
   * {@inheritdoc}
   */
  protected function getFieldSchema(array $id_definition) {
    $schema = parent::getFieldSchema($id_definition);
    if ($schema['type'] == 'varchar') {
      $schema['length'] = 100;
    }
    return $schema;
  }

}


The following is an example migration using my custom idMap definition.

id: company_location6
status: true
migration_tags:
  - company
idMap:
  plugin: sql_large_key
source:
  plugin: csv_by_key
  path: data/company.csv
  header_row_count: 1
  # Need each unique key to build proper hierarchy
  keys:
    - Location1
    - Location2
    - Location3
    - Location4
    - Location5
    - Location6
process:
  name:
    plugin: skip_on_empty
    method: row
    source: Location6
  vid:
    plugin: default_value
    default_value: locations
  # Find parent using key from previous level migration.
  parent_id:
    plugin: migration
    migration:
      - company_location5
    source:
      - Location1
      - Location2
      - Location3
      - Location4
      - Location5
  parent:
    plugin: default_value
    default_value: 0
    source: '@parent_id'
destination:
  plugin: 'entity:taxonomy_term'
migration_dependencies: {  }

And, voila! 6,000 locations later there is a preserved hierarchy!

Dec 01 2017
Dec 01

I just read a quick post over on another Drupal shop's blog, Be a Partner, not a Vendor, and added a comment to the great point Dylan made about not limiting your project to the specs.

We've recently started asking our clients directly about their priorities for the project. Not just overall, but specifically for each one -- and particularly how they would rank these three aspects:

  • Timing
  • Budget
  • User Experience

It turns out the answer to this question can vary a lot! And if you're not on the same page with your client, you're probably going to disappoint them.

We find with most of our projects that delivering the base functionality is generally really straightforward, and we almost always nail the budget. But it turns out that the base functionality often is not as user-friendly as our clients pictured. Getting really nice, usable interfaces can take a lot more effort than just delivering a working, functional solution. This is the grey area where disagreements and missed expectations grow.

It's particularly a challenge for budget-sensitive clients. Lots of people come to us looking for Facebook-like user experiences, but they certainly don't have the budget to have a development team the size of Facebook working full time! We can provide an amazing amount of functionality on shoestring budgets, but that doesn't mean they're going to be as polished as what software-as-a-service providers build with multiple in-house developers.

Which are your priorities?

This model is not new. It's commonly called the Quality Triangle or Project Management Triangle -- "Fast, cheap, or good -- pick 2." However, instead of just identifying the lowest priority, we think ranking all three is more useful. If we take these 3 priorities, there are 6 different ways of ordering them:

  • Fast, within budget, cut scope
  • Strict budget, get launched, cut scope
  • Fast, high quality, add budget to get there
  • Premium user experience, get launched, ok to go over budget
  • Strict budget, high quality, take as long as you need
  • Premium user experience, keep to budget, willing to wait

Let's drill down into these project priorities just a little more, and give them names.

The Agile Entrepreneur - Fast and within budget

If you prioritize speed over all else, and don't mind reducing scope, you're probably into Agile. "Launch early and often", "if you're not breaking things you're not moving fast enough" and other slogans fit this type of prioritization, and it's pretty much the Silicon Valley mindset these days -- many of the most successful Internet companies started out this way.

The Hustler - Spend the least amount to get a result quickly

Starting with little or no budget, the hustlers get out and get to work, creating value out of scarce resources. They may not have the budget to spend early on, but they emphasize getting something up and working so they can grow and add more functionality later, even if the first few releases are garbage -- having a presence is more important than getting it right.

The Mogul - Get there fast, don't care how much it costs

This one describes itself -- throw money at the problem, with speed being the highest priority. Get something out the door and fix it later. The biggest difference between the Mogul and the Agile Entrepreneur is how much capital they have to throw at the problem.

The Visionary - It's got to be the best, get it going quickly

With more of an emphasis on quality, while still getting the project done and launched, these kinds of visionaries are among our favorites to work with. These are the sites that win awards, let us stretch our toolbox, and can be really fun to do.

The Main Street business - Will wait for a decent result, but it's got to be within my budget

We are quite accustomed to working with businesses with a very limited budget who want the best result possible. The web is developing so rapidly, and things are changing so quickly, that sites which were cutting-edge five years ago now cost a quarter of what they did then. We can spin up basic sites in an afternoon. But if you want sophisticated user experiences, you might have to wait until there's a publicly-available module we can drop into your site, if you don't have the budget for us to develop it.

The Craftsman - Get the best result, however long it takes

We recently launched a site that was over 2 years in development. Most of the delay was our client going quiet on us, and not having their content ready to go, but part of it was the pixel-perfect, exacting design priorities they had, and not a lot of budget to spend on it. Eventually their priorities changed to more of the "Visionary" that needed to actually get it launched, they freed up some budget, and we got them live -- but if you want the best result on no budget, it's probably best to learn how to use the tools to build it yourself!

Or, another alternative these days is crowd-sourcing. We're quite interested in working with companies or organizations with a need, that many other organizations might share -- while you may not have the budget to get the job done, if you can help us reach out to other organizations who might be able to chip in, you could pool resources to make it something we can deliver. Obviously, this takes time, but it's a very interesting model...

Which one describes you -- today?

I can think of a client for every one of these categories -- we can usually find a way to work well with any of these. Where things go downhill is when we either misunderstand a client's priorities (which is why we now ask for these up front) or if we fail to manage to what we know are a client's priorities (allowing a "main street" customer to expand the scope beyond the budget they were willing to spend).

Internally, we definitely fall into the "Craftsman" category, building out our stuff, usually too slowly, but always at the highest quality. Our favorite clients are the Visionaries -- the clients who want the highest quality, and don't mind spending to get it. But our successes go across the board.

Which one are you?

Feel free to drop us a line, or give us a ring at 206-577-0540.

Dec 01 2017
Dec 01

Great designers know that successful solutions begin with being a great listener. Whether we’re listening to stakeholders, the users' needs, or the trends and opportunities in the market, each voice is an important building block, informing the holistic design system. Usability testing is one way we listen to our audience’s feedback throughout the design process, with the goal of determining whether a specific website experience meets the interviewees' needs, is easy to use and, hopefully, even creates delight or surprise.

Usability testing, though, has a reputation for being time-consuming and expensive, and therefore it can be difficult to secure its spot in a design contract. Over the last few years, our design team has prioritized discovering new ways of conducting usability tests in the most efficient, yet effective, way possible. We’ve learned how to quickly organize tests throughout the design process as questions arise, employ tools that we already use on a daily basis, and utilize templates from previous projects wherever applicable.

With these tricks in our back pocket, we can spin up a usability test in minutes instead of hours. Having your preferred testing process and tools ready to go before the design project begins will help ensure that even the leanest-of-lean testing actually takes place, even if there is not a formal line item for testing in the project contract. 

There are a handful of different ways to conduct a usability test, and all approaches fall into a few categories: remote vs. in-person, automated vs. moderated. Choosing the best user-testing method depends on a number of things, like the type of asset you have to test (e.g. a sketch, a static wireframe, a prototype), the nature of the question you want to ask (e.g. navigation usability vs. marketing messaging effectiveness), the amount of time you have and whether you have physical access to your participants. Learn more about this process from Nielsen Norman Group's Checklist for Planning Usability Studies.

Because testing can be difficult to squeeze into early scoping phases, we’re most often conducting remote, moderated tests. In short, remote tests are cost-effective and save time when your audience is geographically diverse. Moderated tests can also save time in test preparation, and we like learning things we did not expect to learn when we can actually be present with the user, as opposed to sending out an automated test.

Although our usability testing toolset often changes as we continue to learn and refine our process, one pairing you’ll find us using to record remote, moderated usability tests is Invision App plus ScreenFlow.

Invision App and ScreenFlow fit our principles for choosing usability testing tools:

  • Efficient: It needs to be as quick and easy to set up and conduct as possible. 

  • Participant-friendly: It needs to feel easy for participants to quickly get set up (one tool we evaluated, for example, has a lot of great features but requires Google Chrome and a downloaded extension before users are up and running).

  • Inexpensive: We want testing to create value, not add cost. As often as possible, we’re looking to use tools that we already employ on a daily basis. 

  • Flexible: We need to be able to test on various devices (e.g. mobile, tablet, desktop) as well as different types of assets: from simple paper sketches to a clickable digital prototype.

Let’s take a look at how to conduct remote, moderated usability tests with Invision App and ScreenFlow.

Setting up a Test with Invision App and ScreenFlow

1. Close all apps on your computer (e.g. messenger, slack). Disable all notifications as well (on a Mac, option-click the little notification list icon on the top right of the menu bar). We want to make sure notifications are not popping up while recording the test.

2. Spin up an Invision Liveshare presentation. You can also create Liveshares with mobile projects. Invision Liveshares work best for testing static sketches, wireframes, mockups, or clickable prototypes built in Invision. If you have an HTML prototype, consider using a screen-sharing application (e.g. Google Hangout, GoToMeeting, Slack Calls) to watch the user interact with the prototype, or choose to conduct an automated test. 

Ask the interviewee to select the cursor icon in the Invision Liveshare toolbar. You will be able to see your cursor as well as the user’s cursor on the same screen, whereas in a screen sharing application like Google Hangouts or GoToMeeting, you can only see the cursor of the person who is sharing their screen. 

3. Set up your collaborative note-taking app of choice. We love Dropbox Paper for its simplicity. Google Docs works well too. You may choose to invite another co-worker or friend to be the note-taker so that you can focus on interacting with the interviewee. Arrange your screen so that the Liveshare and note-taking app are both visible.

4. Create a conference call line. Send the conference call information to your interviewee. You can also use the call feature in the Invision Liveshare, although lately we have used Uber Conference just to be sure we have a backup recording. Loading the Invision Liveshare for the first time can also be a new experience for interviewees, and we find it more bulletproof to be on the phone while they are getting set up. We usually send an email and a calendar event with the call information a few days in advance, and resend it a few minutes before the call. Be sure to dial the conference call via your computer, or VOIP (e.g. Uber Conference, Google Voice, Skype, Slack Call), and not on your phone. Dialing in on your computer allows ScreenFlow to pick up the call audio.

Uber conference

5. Use ScreenFlow to record your screen and call audio. Make sure both “record audio from” and “record computer audio” boxes are checked in order to capture the interviewee's voice as well as yours. Remember to ask the interviewee for permission to record, even if you are not planning to share the video and simply want to check your notes afterward. Also, consider scheduling a conference call test run with a friend or coworker to verify that audio is being recorded correctly with ScreenFlow.


6. Host the recorded files securely and privately. Export your ScreenFlow video file and host it via your provider of choice. We usually use YouTube (remember to make sure the video is unlisted, meaning no one can find it unless they have the link) or Dropbox. Both allow us to easily share links if necessary.

Usability testing adds incredible value to the design process and, most of all, can save a lot of time and heartache down the road. And it does not have to be expensive or time-consuming! Explore and practice ways to utilize tools you already use on a daily basis, so that when the time comes, you can organize, record and share a usability test as easily and quickly as possible.


But wait! There's more!

Curious about the project referenced in the images used in this article? That would be our soon-to-be-released product called Tugboat, the solution for generating a complete working website for every pull request. Developers receive better feedback instantly and stakeholders are more connected, enabling confident decision-making. Sign up for our mailing list to receive Tugboat product and release updates!


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web