Apr 06 2019

Having engaged in Human-Centered Design Workshops for web development with amazing clients from every sector, our overarching discovery is this: great websites are made before a line of code is ever written.
 
Human-Centered Design work lays the groundwork for a vast expansion of transformative possibilities.

In its simplest terms, Human-Centered Design is a discipline directed toward solving problems for the people who actually use the site. The focus on the end user is a critical distinction that calls for everyone on the project to let go of their own preferences and empathetically focus on optimizing the experience for the user. 
 
When we engage in Human-Centered Design activities with clients, it becomes clear very early in the process that setting aside assumptions and continuously questioning users and their needs eliminates blind spots and opens doors to new insights about the concerns, goals, and relevant behaviors of the people for whom we are developing sites. 

And while Human-Centered Design is generally viewed as a discipline for solving problems, we find it to be much more of a roadmap for creating opportunities.
 
Here’s our 7-step Human-Centered Design process for creating optimal web experiences.

1.    Build empathy with user personas

The first and most essential question: For whom are we building or redesigning this site? We work with clients to identify their most important personas. Each persona gets a name and that’s helpful in keeping us constantly focused on the fact that real people will be interacting with the site in many different ways. 

Following the identification of the key persona groups, we proceed to dig deep, asking “why” and “how” concerning every aspect of their motivations and expectations.

2.    Assess what user personas need from your site

Understanding of and empathy for user personas dovetails into an analysis of how they currently use the site, how that experience can be improved, and how enhancing their experience with the site can drive a deeper relationship. 

We continue to refine and build upon these personas during every phase of the development process, asking questions that reflect back on them by name as we gain an increasing level of empathy. As a result, our shared understanding of the needs and concerns of our persona groups helps to direct development and design with questions such as:

  • “Will this page navigation be clear enough for Clarence?”
  • “How else can we drive home the point for Catherine that we are an efficient, one-stop shop for the full spectrum of her financial needs?” 

This level of inquiry at the front end might feel excessive or out of sync with what many are accustomed to. Invariably, however, establishing a shared language streamlines development moving forward, while laying the groundwork for solutions that meet the needs of users. 


As Tom Gilb noted in Principles of Software Engineering Management, getting it right at the beginning pays off. According to Gilb, fixing a problem discovered during development costs 10 times as much as addressing it in the design phase; if the problem is not discovered until the system is released, it costs 100 times as much to fix. 

3.    Map their journey through the site to key conversions

Just as your user groups do not all fit the same mold, what they are looking for from your site will vary depending on where they are in their relationship with your organization, what we refer to as the user journey. 

Too often, website design focuses on one aspect of the user journey. It needs to be viewed holistically.

For example, if the purchase process is complicated and cumbersome, efforts to provide users with the right information in the right format at the start of their journey run the risk of unraveling. 
 

4.    Identify obstacles in their path

Next step: identify challenges. We map user journeys through every phase, aiming for seamless transitions from one phase to the next.

This step calls for continuous inquiry along with a commitment to not defend or hold on to assumptions or previous solutions that may not be optimal.

  • What have we heard from clients? 
  • Where have breakdowns occurred in conversions and in relationships?
  • How can we address these breakdowns through the messaging, design, or functionality of the website?  

 

5. Brainstorm solutions

Participants are primed at this point for an explosion of ideas. Mindsets are in an empathetic mode and insights have been collected from multiple angles. 

Now is the time to tap into this energy.

We invite all participants to contribute ideas, setting the basic ground rules for brainstorming. 

Good ideas spark more good ideas and spark excitement about new possibilities.

6. Prioritize Solutions 

There are no bad ideas in brainstorming, but in the real world of budgets and timelines, questions such as “how,” “what’s the cost,” “where to begin,” and “what will have the best impact” need to be considered. 

As ideas are synthesized, these answers will begin to take shape.  

Real-world prioritization happens as we clarify whether a client’s objectives will be best met by developing a new site or revising an existing one. Do we want to move forward with “Blue Sky” development that is not grounded in any specific constraints, or a “Greenfield” project that is not required to integrate with current systems?

What does the playing field look like?


7. Create a Roadmap for Development

Too often, web design and development begins at this step. 

With Human-Centered Design many hours, if not days, of research, persona development, empathetic insights, journey mapping, solution gathering, collaborative energy, and excitement about what’s to come have already been invested when we get to this point. 

As a result, clients have the advantage of moving forward with a high degree of alignment among stakeholders, along with a sense of ownership in an outcome that will enhance both the experiences of and relationships with the humans who rely on the site. 

Want to ensure that humans are at the center of your next design or development project? That’s what we do (among other things). Contact us today.

Better yet! If you are at DrupalCon this week, come over to booth 308. We'll be engaging in a human-centered design activity and you'll have the chance to witness human-centered design in action. I look forward to meeting you!


 

Apr 05 2019

It’s been about a year since Nikki and I HAX’ed DrupalCon Nashville. Now, Mike and I are about to embark for DrupalCon Seattle to, once again, HAX all the things. HAXeditor has come a really long way since the last reveal. In the time since last year:

Web component opportunities at DrupalCon to connect with us and others

If you’d like to see what we’ve been up to and what we’re doing in the web components space (where we spend 99% of our day now), watch this video below that shows the theme layer in HAXcms and how it separates state from design.

Apr 05 2019

It’s been a busy start to 2019. I’ve ticked a few boxes on the initiative, attended four camps (for one of which I am lead organizer), and learned the intricacies of preparing international shipping documents. I personally captured 169 sessions this year, bringing the total to just shy of 1,500.

This is my first update to the (unofficial) Drupal Recording Initiative, which is outlined on Open Collective.

Training and mentorship

I’ve started asking camps ahead of time who from their team (volunteers or organizers) they can identify as willing and interested in learning the setup and helping me on-site. Setup is relatively easy, but it’s the hour-by-hour troubleshooting that keeps the capture rate high. For shipped kits, adding the video call to review the equipment and setup seemed like an easy win.

Expanded coverage

A few non-Drupal events have reached out:

  • via BADCamp, I was contacted by someone recording symposia for an HIV-related project at the University of California San Francisco

  • via TCDrupal, contacted by a person helping organize DevFestMN (this past February), which took place at the same venue as TCDrupal

We created the first equipment hub: the Drupal Swiss Association purchased four kits that I configured and shipped for Drupal Mountain Camp. Learning how to prepare the documentation for international shipping was not fun, but if I stick with DHL the next one should be easier.

Plus, I’ve started reaching out to camps not already on my list.

Improved documentation

No update here. I recorded the video call with Mountain Camp, but then immediately deleted it because there was no way it would have been useful. Off-the-cuff wasn’t the way to go. 

Higher recording success rate

Still stumped here. So far, this year is my best yet, at 100% capture across four events. But things fell apart at Mountain Camp. And there was about a 90% capture rate at DevFestMN, though with no sound on any of the screen recordings, meaning every video needed to be fixed in post (which is happening at a rate of about two videos per week).

Streamlined funding

Moving my accounting to Open Collective has been mostly successful. The increase in costs to each camp has been (in general) minimal and nearly all camps have been able to accommodate. I don’t like the fact that the budget shows a positive amount when expenses have been filed against the collective (for which I’ve filed a feature request). Currently, it looks like things are rosy in recording-land, but I’m actually running up my credit card balance booking travel for camps and awaiting reimbursement. While the prior GoFundMe campaign is still out there, it is nice having the reimbursements and (recurring!) donations in one place.

Overall organization

Not much to report, but I’m playing around with a GitHub project to track events as issues.

Content discoverability

While I continually get credited for Drupal.tv, I only offered input and a bunch of YouTube playlists to Ashraf Abed of Debug Academy; it was his students who built it. Now that we have Drupal.tv, I mention it often and also import each camp playlist as part of my process.

Thanks!

Thanks to all my current Open Collective backers. If you value the recordings and the efforts I’m making towards expanding this effort beyond just what one person can do, please consider contributing.

See you at DrupalCon?

I will be participating in two BoFs at DrupalCon Seattle related to the recording initiative. On Wednesday afternoon at 5:30pm is the Open Collective BoF, to discuss how and why the Webform Module, SimplyTest.me, and the recording initiative are all using Open Collective. And then on Thursday morning at 9:45am, I will discuss the recording initiative for anyone who is interested. 

Apr 05 2019

Midwest Drupal Camp (MidCamp) 2020, on March 18-21, will be the seventh annual Chicago-area event that brings together designers, developers, users, and evaluators of the open source Drupal content management software. Attendees come for four days of presentations, professional training, contribution sprints, and socials, all while brushing shoulders with Drupal service providers, hosting vendors, and other members of the broader web development community.

Early-bird Deal!

For 2020, tables will be exclusive to Core sponsors, and the early-bird rate will be $2500, increasing to $3000 on June 1, 2019. Core sponsors will have the option to add on $1000 for "naming rights" to any component of camp they choose—a party, Training Day, Contribution Day, snacks, coffee, or something else.

If you'd like to lock in your table now, email us at [email protected] and we'll get an invoice out to you ASAP.

Apr 05 2019

On this project we built a collection of components using a combination of Paragraphs and referenced block entities. While the system we built was incredibly flexible, there were a number of variations we wanted to be able to apply to each component. We also wanted the system to be easily extensible by the client team going forward. To this end, we came up with a system of configuration entities that would allow us to provide references to classes and thematically name these styles. We built upon this by extending the EntityReferenceSelection plugin, allowing us to customize the list of styles available to a component by defining where those styles could be used.

The use of configuration entities allows the client team to develop and test new style variations in the standard development workflow and deploy them forward to other environments, giving an opportunity to test the new styles in QA prior to deployment to Production.

The Styles configuration entity

This configuration entity is at the heart of the system. It allows the client team to come in through the UI and create new styles. Each style is composed of one or more classes that will later be applied to the container of any component the style is used on. The Style entity also contains configuration that lets the team identify where the style can be used; this is used later in the process to limit the list of available styles to just those components that can actually make use of them.

The resulting configuration for the Style entity can then be exported to YAML, versioned in the project repository, and pushed forward through our development pipeline. Here’s an example of a Style entity after export to the configuration sync directory.

uuid: 7d112e4e-0c0f-486e-ae36-b608f55bf4e4
langcode: en
status: true
dependencies: {  }
id: featured_blue
label: 'Featured - Blue'
classes:
  - comp__featured-blue
uses:
  rte: rte
  cta: cta
  rail: '0'
  layout: '0'
  content: '0'
  oneboxlisting: '0'
  twoboxlisting: '0'
  table: '0'

Uses

For “Uses” we went with a simple configuration form. The result of this form is stored in the key-value store for Drupal 8. We can then access that configuration from our Styles entity and from our other plugins in order to retrieve and decode the values. Because the definition of each use was a simple key and label, we didn’t need anything more complex for storage.
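As a rough sketch of that storage (the collection name and method bodies here are our illustration, not the module’s actual code), the form’s submit handler can write the key/label pairs through Drupal’s key-value API, and the Styles entity can read them back:

```php
// Hypothetical excerpt: persisting the "Uses" form values.
// The 'bcbsmn_styles' collection name is illustrative only.
public function submitForm(array &$form, FormStateInterface $form_state) {
  // Each use is a simple machine key => human-readable label pair.
  \Drupal::keyValue('bcbsmn_styles')
    ->set('uses', $form_state->getValue('uses'));
}

/**
 * Reads the defined uses back out, e.g. for Styles::getUses().
 */
public static function getUses() {
  // Fall back to an empty array before any uses have been saved.
  return \Drupal::keyValue('bcbsmn_styles')->get('uses', []);
}
```

Because the key-value store accepts arbitrary serialized values, no schema or custom table is needed for this simple key/label structure.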

Assigning context through a custom Selection Plugin

By extending the core EntityReferenceSelection plugin, we’re able to combine our list of Uses with the uses defined in each style. To add Styles to a component, the developer first adds an entity reference field pointing at the Styles config entity to the component in question. In the configuration for that entity reference field, we can choose our custom Selection Plugin. This exposes our list of defined uses, and we can then select the appropriate uses for this component. The end result is that only the applicable styles will be presented to the content team when they create components of this type.

<?php

use Drupal\bcbsmn_styles\Entity\Styles;
use Drupal\Core\Entity\EntityReferenceSelection\SelectionPluginBase;
use Drupal\Core\Entity\EntityReferenceSelection\SelectionTrait;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;

/**
 * Plugin implementation of the 'selection' entity_reference.
 *
 * @EntityReferenceSelection(
 *   id = "uses",
 *   label = @Translation("Uses: Filter by where the referenced entity will be used."),
 *   group = "uses",
 *   weight = 0
 * )
 */
class UsesSelection extends SelectionPluginBase implements ContainerFactoryPluginInterface {

  use SelectionTrait;

  /**
   * {@inheritdoc}
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
    $form = parent::buildConfigurationForm($form, $form_state);

    $options = Styles::getUses();
    $uses = $this->getConfiguration()['uses'];

    if ($options) {
      $form['uses'] = [
        '#type' => 'checkboxes',
        '#title' => $this->t('Uses'),
        '#options' => $options,
        '#default_value' => $uses,
      ];
    }
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function getReferenceableEntities($match = NULL, $match_operator = 'CONTAINS', $limit = 0) {
    $uses_config = $this->getConfiguration()['uses'];

    $uses = [];
    foreach ($uses_config as $key => $value) {
      if (!empty($value)) {
        $uses[] = $key;
      }
    }

    $styles = \Drupal::entityTypeManager()
      ->getStorage('styles')
      ->loadMultiple();

    $return = [];
    foreach ($styles as $style) {
      foreach ($style->get('uses') as $key => $value) {
        if (!empty($value)) {
          if (in_array($key, $uses)) {
            $return[$style->bundle()][$style->id()] = $style->label();
          }
        }
      }
    }
    return $return;
  }

}

In practice, this selection plugin presents a list of our defined uses in the configuration for the field. The person creating the component can then select the appropriate use definitions, limiting the scope of styles that will be made available to the component.
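For reference, here is roughly what the exported configuration for such a field might contain once our plugin is selected (the field, entity type, and bundle names are hypothetical; the handler id comes from the plugin annotation above):

```yaml
# Hypothetical excerpt from field.field.paragraph.promo.field_styles.yml
field_name: field_styles
entity_type: paragraph
bundle: promo
settings:
  handler: uses
  handler_settings:
    uses:
      cta: cta
      rte: '0'
```

In this sketch, only styles whose own "uses" configuration includes cta would be offered to editors of this component.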

Components, with style.

The final piece of the puzzle is how we add the selected styles to the components during content creation. Once someone on the content team adds a component to the page and selects a style, we need to apply that style to the component. This is handled by preprocess functions for each type of component we’re working with: in this case, Paragraphs and Blocks.

In both of the examples below we check to see if the entity being rendered has our ‘field_styles’. If the field exists, we load its values and the default class attributes already applied to the entity. We then iterate over any styles applied to the component and add any classes those styles define to an array. Those classes are merged with the default classes for the paragraph or block entity. This allows the classes defined to be applied to the container for the component without a need for modifying any templates.

/**
 * Implements hook_preprocess_HOOK().
 */
function bcbsmn_styles_preprocess_paragraph(&$variables) {
  /** @var \Drupal\paragraphs\Entity\Paragraph $paragraph */
  $paragraph = $variables['paragraph'];
  if ($paragraph->hasField('field_styles')) {
    $styles = $paragraph->get('field_styles')->getValue();
    $classes = isset($variables['attributes']['class']) ? $variables['attributes']['class'] : [];
    foreach ($styles as $value) {
      /** @var \Drupal\bcbsmn_styles\Entity\Styles $style */
      $style = Styles::load($value['target_id']);
      if ($style instanceof Styles) {
        $style_classes = $style->get('classes');
        foreach ($style_classes as $class) {
          $classes[] = $class;
        }
      }
    }
    $variables['attributes']['class'] = $classes;
  }
}

/**
 * Implements hook_preprocess_HOOK().
 */
function bcbsmn_styles_preprocess_block(&$variables) {
  if ($variables['base_plugin_id'] == 'block_content') {
    $block = $variables['content']['#block_content'];
    if ($block->hasField('field_styles')) {
      $styles = $block->get('field_styles')->getValue();
      $classes = isset($variables['attributes']['class']) ? $variables['attributes']['class'] : [];
      foreach ($styles as $value) {
        /** @var \Drupal\bcbsmn_styles\Entity\Styles $style */
        $style = Styles::load($value['target_id']);
        if ($style instanceof Styles) {
          $style_classes = $style->get('classes');
          foreach ($style_classes as $class) {
            $classes[] = $class;
          }
        }
      }
      $variables['attributes']['class'] = $classes;
    }
  }
}

Try it out

We’ve contributed the initial version of this module to Drupal.org as the Style Entity project. We’ll continue to refine this as we use it on future projects and with the input of people like you. Download Style Entity and give it a spin, then let us know what you think in the issue queue.

Apr 05 2019

On behalf of the Drupal Association, we’re excited to welcome you to Seattle, DrupalCon, and the Washington State Convention Center. Your week will be packed with opportunities to learn, network, collaborate, and most importantly, have fun... Drupal style!

To add some listening enjoyment to your travels, download the Drupal Seattle Spotify playlist by Kanopi Studios.

Apr 05 2019

Note this is a copy of page: https://joomla.github.io/cross-cms-compliance/drupalprivacyandcrosscmsgroup

Background

At this point in the history of the open web, privacy is arguably the key issue in software development. As a range of scandals arising from the misuse of data bring pressure on governments and civil society to take action, it is important for software projects - including Drupal - to take proactive steps to value, resource, and support privacy work.

To date, the Drupal project has largely been reliant on the community to take the lead on privacy work. Development initiatives on privacy issues have mostly centred around contributed modules to implement privacy standards required by the EU’s GDPR privacy legislation.

The status of Drupal’s work on privacy was discussed at great length at Drupal Europe last year with members of the WordPress and Joomla communities, as well as a variety of community members in Drupal who are continuing to focus on privacy beyond GDPR.

As a result, we created the Cross-CMS Privacy Group, where participants from a number of open source CMSes learn from each other and work to bring our respective software ecosystems towards common open standards and principles.

For DrupalCon Seattle we would like to present a core privacy initiative that will bring together some of the existing work in contrib as well as the efforts of the others in the cross-CMS group.

Cross-CMS Privacy Group

We have representatives from the communities of Drupal, Joomla, WordPress and Umbraco meeting regularly on Wednesdays at 2-3pm UTC. It’s only been a few months, but we feel that we’ve achieved quite a bit. We’ve managed to stick to our weekly meetings and found that everyone involved has a passion for privacy generally, not just compliance with a specific set of laws.

We’ve found that although our software and community ecosystems are different, we’ve encountered the same set of problems; we’ve just handled them in differing ways. Different CMSes have focused on different areas, which gives us much to learn from each other. For example, WordPress has done a great deal of work on privacy policies, Joomla has fantastic export and import tools for site admins, and Umbraco has put a lot of effort into a Consent API, data export, and the handling of sensitive data fields.

We have already achieved a number of deliverables since beginning:

  • We’ve been working towards a common understanding of how software projects should define privacy that has been influenced by GDPR but aims to go further than mere compliance here: https://github.com/webdevlaw/open-source-privacy-standards (Special thanks to Heather from the WordPress team)

  • We’ve created a repository for posting minutes, and been producing them weekly here (Special thanks to the Joomla! Team especially Luca and Achilleas): https://github.com/joomla/cross-cms-compliance

  • We’ve created a structure for auditing software extensions that could be used by a Drupal privacy team to audit common modules here: https://github.com/joomla/cross-cms-compliance/blob/master/audit-your-software-extension.md

  • We have begun discussing unified standards for file formats for data portability exports and imports, so that users could, in theory, move their data between sites regardless of CMS

  • We’ve created some internal documents comparing the features of our CMSes with the aim to produce a common blueprint for how software best handles user data and privacy tools. We’ve been compiling some legal examples of times when fines have happened and are working together towards a common goal.
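On the data-portability point above, nothing has been standardised yet; purely as an illustration of the idea, a CMS-agnostic user-data export might look something like this (every field name here is invented for the example):

```json
{
  "format": "cross-cms-user-export",
  "version": "0.1",
  "source": { "cms": "drupal", "cms_version": "8.6" },
  "exported": "2019-04-05T12:00:00Z",
  "user": {
    "identifiers": { "username": "example", "email": "user@example.com" },
    "profile": { "display_name": "Example User" },
    "consent_records": [
      { "purpose": "newsletter", "basis": "consent", "granted": "2019-01-15T09:30:00Z" }
    ]
  }
}
```

The value of a shared shape like this would be that an importer on any participating CMS could map the common keys into its own user and consent storage.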

General Points on Privacy

Through our conversations we have become convinced that privacy is no longer just a legal requirement but one of ethical importance. We know that giving users the ability to control their own data, and having means to control their consent, isn’t just about avoiding the proverbial fine. As developers behind some of the largest CMSes in the world, we know that we cannot force website administrators to respect their users’ privacy, but we can at least make it so easy for them to do so that they will need a good reason to not enable these tools.

CMSes can often be the first point of processing an individual’s information. A recent discussion raised by Alan Mac Kenna from the Umbraco CMS community within the group centred on the need to be able to demonstrate accountability for processing not only based on consent, but for other ‘lawful bases’ also, enabling our platforms to become a key source of truth for realising accountability under data protection regulations.

However, putting aside the ethical imperative for privacy tools, there are a number of new legal privacy initiatives currently being worked on (which as of this writing include CCPA, the ePrivacy Regulation revamp, and the growing shape of the eventual US Federal privacy law). Therefore, especially for large organisations and enterprise, core functionality in databases and CMSes will likely be an incentive for future projects and funding.

We feel that the Cross-CMS group will assist their projects to value, resource, and support both the ethical reasons for caring about privacy as well as the business incentives for avoiding legal issues. The more we follow consistent design patterns, open standards, and proactive approaches to legislation, the more all of our clients, users and customers will be protected. Whilst other CMSes will never dictate what Drupal needs to do, we can always benefit from mutual learning and understanding.

We hope that as this initiative grows, we will be able to work in cooperation with regulatory bodies themselves to add further authority to the technical approaches we will take in our software.

Drupal Privacy Initiative Goals

We have a number of potential goals for a Drupal privacy initiative:

  • We want to have a clear roadmap of what features need to be in Drupal core, so that other modules can extend that functionality, and what features can remain in contrib while officially supported.
  • An example: tools for data erasure and the “Right to be forgotten” could be an extension of the existing options given when a site admin cancels a user account;
  • Whereas tools to make it easier to import user data from other CMSes could exist in contrib, using a data structure that the majority of major CMSes share.
  • We want to define what we currently believe are the essential features required to improve a website’s handling of user data and privacy, including:
  • Functionality for logging consent or other legal basis for processing;
  • Functionality for handling the exporting and erasure of user data, taking into account that Drupal stores a lot of data in custom fields or other modules.
  • A privacy team-supported checklist, existing in contrib, to assist organisations in compliance and privacy issues outside of pure tech/code issues.
  • A privacy team which, like the security team, vets submitted modules to see how well they respect privacy requirements, as the WordPress Privacy team does. This could instead be more similar to how the accessibility team operates.
  • Potentially other features, such as something like the Legal module in core, which would allow modules to submit wording for privacy policies, such as which cookies they use and how they handle user data.
  • Build upon the work Brainsum (Peter) and FreelyGive (Jamie or yautja_cetanu) have done on the GDPR module on drupal.org to bring the essential functionality into core where appropriate.
  • We want to create documentation within drupal.org to assist developers, site builders and site administrators alike in understanding the privacy issues which impact Drupal, including understanding what other software does.

Our Next Steps

We hope to follow this blog post up with some detailed presentations on the state of privacy tools in other CMSes, with screenshots and a more detailed plan.

Currently the representatives of Drupal in the Cross-CMS Privacy Group are from two companies which worked on the /project/gdpr module, plus another individual who has worked on various encryption modules. We hope to open this up at DrupalCon Seattle.

Chris Teitzel is representing the initiative at DrupalCon Seattle. Many of the members of the working group are in Europe and, while not in physical attendance, have pledged to make themselves available remotely for any discussions that are required and are willing to help in any way.

Chris hopes to bring together enough people to support this so an official initiative can be created.

In the long term we hope to secure funding for the group to cover travel and accommodation expenses for periodic in-person meetups and other directly relevant activities, such as conferences and workshops. We may also seek funding support for our time and labour contributing to open source privacy, which is already a considerable commitment. We naturally must be careful to consider the values and ethics of any potential sponsors, particularly those which may have a mixed track record on privacy.

Apr 05 2019
Apr 05

Note this is a copy of page: https://joomla.github.io/cross-cms-compliance/drupalprivacyandcrosscmsgroup

Background

At this point in the history of the open web, privacy is arguably the key issue in software development. As a range of scandals arising from the misuse of data bring pressure on governments and civil society to take action, it is important for software projects - including Drupal - to take proactive steps to value, resource, and support privacy work.

To date, the Drupal project has largely been reliant on the community to take the lead on privacy work. Development initiatives on privacy issues have mostly centred around contributed modules to implement privacy standards required by the EU’s GDPR privacy legislation.

The status of Drupal’s work on privacy was discussed at great length at Drupal Europe last year with members of the WordPress and Joomla communities, as well as a variety of community members in Drupal who are continuing to focus on privacy beyond GDPR.

As a result, we created the Cross-CMS privacy group, where participants from a number of open source CMSes learn from each other and work to bring our respective software ecosystems towards a common open standards and principles.

For DrupalCon Seattle we would like to present a core privacy initiative that will bring together some of the existing work in contrib as well as the efforts of the others in the cross-CMS group.

Cross-CMS Privacy Group

We have representatives from the communities of Drupal, Joomla, WordPress and Umbraco meeting regularly on Wednesdays at 2-3pm UTC. It’s only been a few months, but we feel that we’ve achieved quite a bit. We’ve managed to stick to our weekly meetings and found that everyone involved has a passion for privacy generally, not just compliance with a specific set of laws.

We’ve found that although our software and community ecosystems are different, we’ve had to encounter the same set of problems - we’ve just handled things in differing ways. Different CMS’ have focused on different areas, which gives us much to learn from each other. For example, WordPress has done a great deal of work on privacy policies, whilst Joomla has fantastic export and import tools for site admins to manage, Umbraco has put a lot of effort into a Consent API, data export and handling of sensitive data fields.

We have already achieved a number of deliverables since beginning:

  • We’ve been working towards a common understanding of how software projects should define privacy that has been influenced by GDPR but aims to go further than mere compliance here: https://github.com/webdevlaw/open-source-privacy-standards (Special thanks to Heather from the WordPress team)

  • We’ve created a repository for posting minutes, and been producing them weekly here (Special thanks to the Joomla! Team especially Luca and Achilleas): https://github.com/joomla/cross-cms-compliance

  • We’ve created a structure for auditing software extensions, which a Drupal privacy team could use to audit common modules: https://github.com/joomla/cross-cms-compliance/blob/master/audit-your-software-extension.md

  • We have begun discussing unified standards for file formats for data portability exports and imports, so that users could, in theory, move their data between sites regardless of CMS

  • We’ve created some internal documents comparing the features of our CMSes, with the aim of producing a common blueprint for how software can best handle user data and privacy tools. We’ve also been compiling legal examples of cases where fines have been levied, and are working together towards a common goal.

General Points on Privacy

Through our conversations we have become convinced that privacy is no longer just a legal requirement but an ethical imperative. We know that giving users the ability to control their own data, and the means to manage their consent, isn’t just about avoiding the proverbial fine. As developers behind some of the largest CMSes in the world, we know that we cannot force website administrators to respect their users’ privacy, but we can at least make it so easy for them to do so that they will need a good reason not to enable these tools.

CMSes are often the first point at which an individual’s information is processed. A recent discussion within the group, raised by Alan Mac Kenna from the Umbraco community, centred on the need to demonstrate accountability for processing based not only on consent but on the other ‘lawful bases’ as well, enabling our platforms to become a key source of truth for realising accountability under data protection regulations.

However, putting aside the ethical imperative for privacy tools, there are a number of new legal privacy initiatives currently being worked on (which as of this writing include CCPA, the ePrivacy Regulation revamp, and the growing shape of the eventual US Federal privacy law). Therefore, especially for large organisations and enterprise, core functionality in databases and CMSes will likely be an incentive for future projects and funding.

We feel that the Cross-CMS group will help its member projects to value, resource, and support both the ethical reasons for caring about privacy and the business incentives for avoiding legal issues. The more we follow consistent design patterns, open standards, and proactive approaches to legislation, the better all of our clients, users and customers will be protected. Whilst other CMSes will never dictate what Drupal needs to do, we can always benefit from mutual learning and understanding.

We hope that as this initiative grows, we will be able to work in cooperation with regulatory bodies themselves to add further authority to the technical approaches we will take in our software.

Drupal Privacy Initiative Goals

We have a number of potential goals for a Drupal privacy initiative:

  • We want a clear roadmap of which features need to be in Drupal core, so that other modules can extend that functionality, and which features can remain officially supported outside of core.
  • For example, tools for data erasure and the “right to be forgotten” could extend the existing options offered when a site admin cancels a user account;
  • whereas tools that make it easier to import user data from other CMSes could live in contrib, using a data structure shared with the other major CMSes.
  • We want to define what we currently believe are the essential features required to improve a website’s handling of user data and privacy, including:
  • Functionality for logging consent or other legal basis for processing;
  • Functionality for handling the exporting and erasure of user data, taking into account that Drupal stores a lot of data in custom fields or other modules.
  • A privacy team-supported checklist, existing in contrib, to assist organisations in compliance and privacy issues outside of pure tech/code issues.
  • A privacy team which, like the security team, vets submitted modules to see how well they respect privacy requirements, as the WordPress Privacy team does. This could instead be more similar to how the accessibility team operates.
  • Potentially other features, such as something like the Legal module in core, which would allow modules to submit wording for privacy policies, such as what cookies they use and how they handle user data.
  • Build upon the work Brainsum (Peter) and FreelyGive (Jamie or yautja_cetanu) have done on the GDPR module on drupal.org to bring the essential functionality into core where appropriate.
  • We want to create documentation within drupal.org to assist developers, site builders and site administrators alike in understanding the privacy issues which impact Drupal, including understanding what other software does.

Our Next Steps

We hope to follow this blog post up with some detailed presentations on the state of privacy tools in other CMSes, with screenshots and a more detailed plan.

Currently, the Drupal representatives in the Cross-CMS Privacy Group come from two companies which worked on the /project/gdpr module, plus another individual who has worked on various encryption modules. We hope to open this up at DrupalCon Seattle.

Chris Teitzel is representing the initiative at DrupalCon Seattle. Many of the members of the working group are in Europe and, while not in physical attendance, have pledged to make themselves available remotely for any discussions that are required and are willing to help in any way.

Chris hopes to bring together enough people to support this so an official initiative can be created.

In the long term we hope to secure funding for the group to cover travel and accommodation expenses for periodic in-person meetups and other directly relevant activities, such as conferences and workshops. We may also seek funding support for our time and labour contributing to open-source privacy, which is already a considerable commitment. We naturally must be careful to consider the values and ethics of any potential sponsors, particularly those with a mixed track record on privacy.

More information about the BoF can be found here: https://events.drupal.org/seattle2019/bofs/drupal-privacy-initiative

The proposal can be found on drupal.org here: https://www.drupal.org/project/ideas/issues/3009356

Apr 05 2019
Apr 05

You can find many comparisons of popular CMSes on the internet. Whenever Drupal is mentioned in them, it is always described with words like safe, open, and regularly updated. Today I'm going to explain why it has earned such a reputation. I will also present evidence that the level of security claimed by the Drupal community is not just empty words.

Here are five reasons behind Drupal's security:

1. The open-source model

You'll probably say: most CMSes are distributed under an open-source model, so why would that be Drupal's advantage? Take a broader look at CMS ecosystems, though. Almost all of the platforms themselves are available under open licences, but their modules, plug-ins and skins are often distributed commercially. In the case of WordPress, the sale of add-ons is quite popular, and it limits the openness of the code.

The Drupal community has developed a completely different, centralised model. It consists mostly of free add-ons, available directly from the drupal.org website and additionally covered by an internal security programme. Such an approach somewhat complicates the creation of modules and skins, but at the same time it makes it difficult to smuggle in malicious code. The centralised model also shows its advantages when security flaws are discovered in Drupal itself, as was perfectly demonstrated by the recent SA-CORE-2019-003 security update, which covered not only the core but also popular modules.

2. Security Team

You can often hear about critical errors in Drupal. In 2018 and 2019 we had to deal with several highly critical updates, some of which were even named "Drupalgeddon". However, this is not the result of poor-quality code. It's the result of the titanic work carried out by the community, especially the part of it dealing with security.

Drupal's Security Team currently consists of over 30 people from various companies and organisations around the world. It has earned a well-deserved reputation through efficiency and a downright paranoid focus on security issues. It is supported by volunteers, some of whom also work as part of paid "bug bounty" programmes.

3. Organisations' support

As the example of WordPress shows, security does not automatically go hand in hand with reach; the way a given CMS is used matters a great deal. Drupal is a system often chosen by large corporations and government agencies (some of them can be found on this list: https://groups.drupal.org/government-sites). Even such giants as Tesla, Nokia, Harvard University, London and Los Angeles, as well as NASA have put their trust in it. Each of these organisations takes good care of the security of its websites and conducts innumerable internal audits. The vulnerabilities found are usually passed on to the aforementioned Security Team.

The support of big players is very important and necessary; thanks to it, an open-source project becomes a common good whose development benefits everyone involved. As a point of interest, at the beginning of 2019 the European Commission announced the EU-FOSSA 2 (European Commission Free and Open Source Software Audit) programme, whose second-largest beneficiary is in fact Drupal. As part of the programme, people who track down errors receive a monetary reward, the amount of which depends on the significance of the reported vulnerability. There's €89,000 in the pot.

4. Composer and the Symfony components

Since version 8, Drupal has been integrated with Symfony components. It's a huge leap forward towards code standardisation. The use of proven components, such as EventDispatcher, HttpFoundation/HttpKernel and Routing, relieves developers of the need to maintain their own solutions. At the same time, the Symfony community is gaining new developers and new sponsors. Thanks to this, the level of security is constantly rising, because the underlying libraries are becoming increasingly watertight.

Alongside the Symfony components came broad adoption of Composer, an excellent package manager created along the lines of npm. It was already possible to use it with Drupal 7, but in version 8 it became a practically obligatory tool, without which it is hard to even imagine the ongoing maintenance of a site. It can deal with complex dependencies, download non-PHP libraries, apply patches and run installation scripts. Today, with Composer's tooling very mature, it's safe to say that it marks a milestone in keeping Drupal code up to date.

5. Security mechanisms

I've already written about many organisational, financial and design factors. I didn't, however, specifically mention which mechanisms protect a webmaster from an attack on their site. Here are some of them:

  • Users' passwords are stored in hashed form, using a salt and an iterated hash function. This makes brute-force attacks more difficult to carry out.
  • The site configuration can be exported to .yml files and compared against its previous version. What's more, you can store it in the code repository, for example using the Features module. It's an excellent solution that greatly limits the possibility of unnoticed changes to the database.
  • Advanced permission management allows you to define user roles and assign them activities that they can perform on the site. The creators of Drupal put a lot of emphasis on this issue, and today it results in a high degree of security.
  • The error reporting system records security-relevant events, including missing .htaccess files in sensitive directories.
  • The update system allows you to download and install the latest versions of modules from the admin panel. It's very convenient on shared hosting, which does not always offer the ability to use Composer.
  • The Twig template language auto-escapes output, providing a strong defence against XSS attacks, while Drupal's form system uses tokens to protect against CSRF.
  • The extensive caching system helps to mitigate DoS attacks.
  • It is possible to fully encrypt the database.
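The first bullet is worth making concrete. Below is a minimal Python sketch of salted, iterated hashing; it is illustrative only and is not Drupal's actual implementation (Drupal uses its own iterated scheme), but it shows why a per-user salt plus many hash rounds makes brute-force attacks expensive:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes, iterations: int = 2 ** 16) -> bytes:
    """Salted, iterated hash sketch (illustrative, not Drupal's exact scheme)."""
    digest = salt + password.encode("utf-8")
    for _ in range(iterations):
        # Each extra round multiplies the attacker's work per guess.
        digest = hashlib.sha512(digest).digest()
    return digest

def verify(password: str, salt: bytes, expected: bytes) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(hash_password(password, salt), expected)

salt = os.urandom(16)  # a unique random salt defeats precomputed rainbow tables
stored = hash_password("s3cret", salt)
```

Because the salt differs per user, identical passwords hash to different values, and the iteration count can be raised over time as hardware gets faster.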

It is worth adding that Drupal follows the OWASP (Open Web Application Security Project) guidelines, which define the basic security principles that a modern web project should meet.


Apr 04 2019
Apr 04

Responsive images overview

As screen resolutions and pixel densities continue to climb year after year, it's becoming more important to deliver the best possible image quality to your visitors. The easy way out is to deliver a single high-resolution image, but this can have a real impact on page load time and bandwidth usage, especially for visitors on mobile devices and networks. The better solution is to deliver an appropriately sized image based on the screen width/resolution of the browser. So, instead of always delivering a super-high-res image to mobile device users (whose browsers will be forced to downsize the image to fit anyway), deliver an image that's better sized for that screen. Lower-resolution images have a much smaller file size, so your visitors won't have to download as much data and the image will download faster.

Thankfully, a native HTML solution for delivering different images for different browser viewports has existed for years: using the "srcset" and "sizes" attributes of the existing <img> element.

To quickly demonstrate how it works, let's take this super simple scenario of an image on your site that will always be displayed at 100% width of the browser. This is how the image element would look:

<img src="https://bkosborne.com/path/to/fallback.jpg" srcset="/path/to/higher/resolution.jpg 1500w, /path/to/lower/resolution.jpg 750w" sizes="100vw"/>

The srcset attribute provides your browser a list of images and how wide each is in real pixels. The sizes attribute tells the browser how wide the image will be displayed after it's been laid out and CSS rules applied to it.

But wait, don't browsers already know how wide an image will be when it's rendered on a page? They're responsible for rendering the page, after all! Why can't they just figure out how wide the image will be rendered and select the most appropriate image source from the "srcset" list? Why is this "sizes" attribute needed at all?

Well, it's true that browsers know this information, but they don't know it until they have finished parsing all the JS and CSS on the page. Because processing the CSS/JS takes a while, browsers don't wait; they begin downloading images referenced in your HTML immediately, which means they need to know which image to download right away.

In the simple scenario above, the site is designed to always render the image at 100% width via CSS, so we indicate as such by adding a single value "100vw" (vw stands for viewport width) to the sizes attribute. The browser then decides which image to load depending on the width of the viewport when the page is loaded. An iPhone 8 in portrait mode has a "CSS" width of 375 pixels, but it has a 2:1 pixel density ratio (a "retina" screen), which means it can actually display images that are double that width, at 750px wide. So the browser on this phone will download the lower-resolution version of the image, which happens to match exactly at 750px wide. On a 1080p desktop monitor the viewport will be wider than 750px, so the larger-resolution image will be downloaded.
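To make that selection logic concrete, here's a small Python model of the decision. It's a simplification (the HTML spec leaves browsers free to factor in things like network conditions), but it captures the core rule: pick the smallest candidate whose real-pixel width covers the rendered CSS width times the device pixel ratio.

```python
def pick_source(srcset: dict[str, int], css_width_px: float, dpr: float) -> str:
    """Simplified model of how a browser picks a srcset candidate."""
    needed = css_width_px * dpr
    candidates = sorted(srcset.items(), key=lambda item: item[1])
    for url, width in candidates:
        if width >= needed:
            return url
    return candidates[-1][0]  # nothing large enough: fall back to the largest

srcset = {
    "/path/to/lower/resolution.jpg": 750,
    "/path/to/higher/resolution.jpg": 1500,
}

pick_source(srcset, 375, 2)   # iPhone 8 portrait, sizes=100vw: the 750w image
pick_source(srcset, 1920, 1)  # 1080p desktop, sizes=100vw: the 1500w image
```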

Responsive images delivered in this manner work really well for this simple use case.

Things start to get more complicated when the image being displayed on your site does NOT take up the full width of the browser viewport. For example, imagine a site design where an image is displayed 1500px wide at the desktop breakpoint, but is displayed at 50% width at tablet/mobile breakpoints. Now the image element changes to this:

<img src="https://bkosborne.com/path/to/fallback.jpg" srcset="/path/to/high/resolution.jpg 1500w, /path/to/low/resolution.jpg 750w" sizes="(min-width: 1500px) 1500px, 50vw"/>

The sizes attribute has changed to indicate that if the viewport width is at least 1500px wide, then the site's CSS is going to render the image at 1500px and no larger. If the viewport width is lower, then that first rule in the sizes attribute fails, and it falls back to the next one, so the site will render the image at 50% viewport width. The browser will translate that value to an actual pixel width (and take into account pixel density of the device) to select the appropriate image to download.
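Evaluated in code, the fallback behaviour of that sizes value looks roughly like this (the media condition is hard-coded for this one example; real browsers parse the attribute generically):

```python
def rendered_width(viewport_w: float) -> float:
    """Evaluate sizes="(min-width: 1500px) 1500px, 50vw" for a viewport width."""
    if viewport_w >= 1500:   # first media condition matches
        return 1500
    return viewport_w * 0.5  # fallback rule: 50vw

rendered_width(1600)  # 1500: capped by the media condition
rendered_width(800)   # 400.0: 50% of the viewport
```

The browser multiplies the result by the device pixel ratio before choosing a srcset candidate, exactly as in the 100vw case.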

The problem this creates for dynamic layout builders

Now, imagine a dynamic layout builder tool on a content management system, like the new Layout Builder module for Drupal 8:

[Screenshot: the Layout Builder edit page]

This great layout tool allows site builders to dynamically add rows and columns to the content region of a page and insert blocks of content into the columns.

One of the "blocks" that can be inserted into a column is an image. How do you determine the value of the "sizes" attribute for the image element? Remember, the sizes attribute tells the browser how wide the image will be when it's rendered and laid out by your CSS. Let's just focus on desktop screen resolutions for now, and say that your site will display the content region at a width of 1500 CSS pixels for desktops. A site builder could decide to insert an image in any of the following ways:

  • Into a single column row (image displays at 1500px wide)
  • Into the left-most column of a 50% - 25% - 25% row (image displays at 750px wide)
  • Into the right-most column of a 33% - 33% - 33% row (image displays at 500px wide)
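The arithmetic behind those three widths is simple, which is exactly why it's frustrating that the markup-generating code can't do it; a sketch, using the 1500px content-region figure from above:

```python
CONTENT_REGION_WIDTH = 1500  # desktop content region width in CSS px (example value)

def column_image_width(fraction: float) -> int:
    """Width at which an image renders when placed in a column of this fraction."""
    return round(CONTENT_REGION_WIDTH * fraction)

column_image_width(1.0)    # single-column row: 1500
column_image_width(0.5)    # 50% column: 750
column_image_width(1 / 3)  # 33% column: 500
```

The missing input is `fraction`: the image formatter would need to know which column the block landed in to compute it.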

The value of the "sizes" attribute differs for each of those three scenarios, which means that when Drupal is generating the image element markup, it needs to know the width of the column that the image was placed in.

The Drupal-specific problem is that (to my current knowledge) there's no practical way for the code that generates the image element markup to know information about the column the image was inserted in. Without this knowledge transfer, it's impossible to convey an accurate value for the "sizes" attribute.

Things get even more complicated if you're developing a solution that has to work with multiple different themes, where each theme may have different breakpoints and rules about the width of the content region at various breakpoints.

Moving forward

I think this is a new and interesting challenge, and I don't know that anyone has put much thought into how to solve it yet. I'm certainly hoping others read this and provide some ideas, because I'm not sure what the best solution is. The easy solution is of course to just not output the image responsively, and just use a single image src like the old days. In the example above, the image would need to be 1500px wide to account for the largest possibility.

Apr 04 2019
Apr 04
By Julia Gutierrez

DrupalCon 2019 is heading to Seattle this year and there’s no shortage of exciting sessions and great networking events on this year’s schedule. We can’t wait to hear from some of the experts out in the Drupalverse next week, and we wanted to share with you a few of the sessions we’re most excited about.

Adam is looking forward to:

Government Summit on Monday, April 8th

“I’m looking forward to hearing what other digital offices are doing to improve constituents’ interactions with government so that we can bring some of their insights to the work our agencies are doing. I’m also excited to present on some of the civic tech projects we have been doing at MassGovDigital so that we can get feedback and new ideas from our peers.”

Bryan is looking forward to:

1. Introduction to Decoupled Drupal with Gatsby and React

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 6B | Level 6

“We’re using Gatsby and React today to power Search.mass.gov and the state’s budget website, and Drupal for Mass.gov. Can’t wait to learn about Decoupled Drupal with Gatsby. I wonder if this could be the right recipe to help us make the leap!”

2. Why Will JSON API go into Core?

Time: Wednesday, April 10th from 2:30 pm to 3:00 pm

Room: 612 | Level 6

“Making data available in machine-readable formats via web services is critical to open data and to publish-once / single-source-of-truth editorial workflows. I’m grateful to Wim Leers and Mateu Aguilo Bosch for their important thought leadership and contributions in this space, and eager to learn how Mass.gov can best maximize our use of JSON API moving forward.”

I (Julia) am looking forward to:

1. Personalizing the Teach for America applicant journey

Time: Wednesday, April 10th from 1:00 pm to 1:30 pm

Room: 607 | Level 6

“I am really interested in learning from Teach for America on how they implemented personalization and integrated across applications to bring applicants a consistent look, feel, and experience when applying for a Teach for America position. We have created Mayflower, Massachusetts government’s design system, and we want to learn what a single sign-on for different government services might look like and how we might use personalization to improve the experience constituents have when interacting with Massachusetts government digitally. ”

2. Devsigners and Unicorns

Time: Wednesday, April 10th from 4:00 pm to 4:30 pm

Room: 612 | Level 6

“I’m hoping to hear if Chris Strahl has any ‘best-practices’ and ways for project managers to leverage the unique multi-skill abilities that Devsigners and unicorns possess while continuing to encourage a balanced workload for their team. This balancing act could lead towards better development and design products for Massachusetts constituents and I’d love to make that happen with his advice!”

Melissa is looking forward to:

1. DevOps: Why, How, and What

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 602–604 | Level 6

“Rob Bayliss and Kelly Albrecht will use a survey they released, as well as some other important approaches, to elaborate on why DevOps is so crucial to technological strategy. I took the survey back in November 2018, and I want to see the results. This presentation will help me identify whether any changes should be made in our process to better serve constituents.”

2. Advanced Automated Visual Testing

Time: Thursday, April 11th from 2:30 pm to 3:00 pm

Room: 608 | Level 6

“In this session Shweta Sharma will speak to which visual testing tools are currently out there, with a comparison of the tools. I am excited to gain more insight into automated visual testing for faster releases, so we can identify any gotchas and improve our releases for Mass.gov users.

P.S. Watch a presentation I gave at this year’s NerdSummit in Boston, and stay tuned for a blog post on some automation tools we used at MassGovDigital coming out soon!”

Lastly, we really hope to see you at our presentations.

We hope to see old friends and make new ones at DrupalCon2019, so be sure to say hi to Bryan, Adam, Melissa, Lisa, Moshe, or me when you see us. We will be at booth 321 (across from the VIP lounge) on Thursday giving interviews and chatting about technology in Massachusetts, we hope you’ll stop by!

Apr 04 2019
Apr 04

With all the excitement about decoupled Drupal over the past several years, I wanted to take a moment to articulate a few specific factors that make headless a good approach for a project – as well as a few that don’t. Quick disclaimer: this is definitely an oversimplification of an otherwise complex subject, and is based entirely on our experience here at Aten. Others will draw different conclusions, and that’s great. In fact, the diversity of perspectives and conclusions about use cases for headless underscores just how incredibly flexible Drupal is. So here’s our take.

First, What is Decoupled?

I’ll keep this real short: decoupled (or headless) Drupal basically means using Drupal for backend content management, and using a separate framework (Angular, React, etc.) for delivering the front-end experience. It completely decouples content presentation (the head) from content management (the body), thus “headless”. There are tons of resources about this already, and I couldn’t come close to covering it as well as others already have. See Related Reading at the end of this post for more info.

Decoupled and Progressively Decoupled

For the purpose of this post, Decoupled Drupal means any Drupal backend that uses a separate technology stack for the front-end. Again, there’s lots of great material on the difference between “decoupled” and “progressively decoupled”. In this post, just pretend they mean the same thing. You can definitely build a decoupled app on top of your traditional Drupal stack, and there are often good reasons to do exactly that.

Why Decoupled?

Decoupled Drupal provides massive flexibility for designing and developing websites, web apps, native apps and other digital products. With the decoupled approach, designers and front-end developers can conspire to build whatever experience they wish, virtually without limitation. It’s great for progressive web apps, where animations, screen transitions and interactivity are particularly important. Decoupled is all but necessary for native apps, where content is typically managed on a centralized server and published via an API to instances of the app running on people’s devices. In recent years, Decoupled Drupal has gained popularity even for more “traditional” websites, again primarily because of the flexibility it provides.
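On the publishing side, a decoupled front end typically consumes documents like the one below. The endpoint path and field names here are illustrative assumptions, but the data/attributes shape follows the JSON:API specification that Drupal 8's JSON:API module implements:

```python
import json

def article_titles(doc: dict) -> list[str]:
    """Pull titles out of a JSON:API collection response."""
    return [item["attributes"]["title"] for item in doc["data"]]

# A trimmed response such as GET /jsonapi/node/article might return
# (hypothetical content):
response_body = json.loads("""
{
  "data": [
    {"type": "node--article", "id": "uuid-1", "attributes": {"title": "Hello"}},
    {"type": "node--article", "id": "uuid-2", "attributes": {"title": "World"}}
  ]
}
""")

article_titles(response_body)  # ["Hello", "World"]
```

Because the API contract is all the front end depends on, the same endpoint can feed a website, a native app, or any other product.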

Pros and Cons

I’m not going to list pros and cons per se. Other articles do that. I’m more interested in looking at the specific reasons we’ve chosen to leverage a decoupled approach for some projects, and the reasons we’ve chosen not to for others. I’m going to share our perspective about when to go decoupled, and when not to go decoupled.

When to Decouple

Here are a few key questions we typically ask when evaluating a project as a fit for Decoupled Drupal:

  • Do you have separate backend and front-end development resources? Because decoupled means building a completely separate backend (Drupal) from front-end (Angular, React, etc.), you will need a team capable of building and maintaining both. Whether it’s in-house, freelance or agency support, Decoupled Drupal usually requires two different development teams collaborating to be successful. Organizations with both front-end devs and backend devs typically check “yes” on this one. Organizations with a few generalist developers, site builders or web admin folks should pause and think seriously about whether or not they have the right people in place to support a decoupled project.
  • Are you building a native app and want to use Drupal to manage your content, data, users, etc.? If yes, we’re almost certainly talking decoupled. See “Why Decoupled?” above.
  • Do you envision publishing content or data across multiple products or platforms? Example: we recently built an education product that serves both teachers and their early childhood students. We needed a classroom management app for the former, and an “activity explorer” with links to interactive games for the latter. Multiple products pulling from a single backend is often a good fit for decoupled.
  • Is interactivity itself a primary concern? There are plenty of cases where the traditional web experience – click a link, load a new page – just doesn’t do the content justice. Interactive data visualizations and maps are great examples. If your digital project requires app-like interaction with transitions, animations, and complex user flows, you will likely benefit from an expressive front-end framework like Ember or Angular. In those cases, decoupled is often a great fit.
  • Does working around Drupal’s rich feature set and interface create more work in the long run? Drupal ships with a ton of built-in features for managing and viewing content: node edit screens, tabs, subtabs, node view pages, admin screens, and on and on. Sometimes you just don’t need all of that. For some applications, working around Drupal’s default screens is more work than building something custom. In some cases, you may want to take advantage of Drupal’s flexible content model to store content, but need a completely different interface for adding and managing that content. Consider evites as a hypothetical example. The underlying content structure could map nicely to nodes or custom entities in Drupal. The process for creating an invitation, customizing it, adding recipients, previewing and sending, however, is something else altogether. Decoupled Drupal would allow you to build a front-end experience (customizing your invitation) exactly as you need it, while storing the actual content (the invite) and handling business logic (saving, sending, etc.) in Drupal.
  • Do you want the hottest technology? Sometimes it’s important to be at the cutting edge. We respect that 100%. Decoupled Drupal provides incredible flexibility and empowers teams to build rich, beautiful experiences with very few limitations. Further, it allows (virtually requires) your teams – both front-end and backend – to work with the very latest development tools and frameworks.

When Not to Decouple

Conversely, here are a few key questions we ask that might rule out Decoupled Drupal for a project:

  • First, take another look at the list above. If you answered “no” to all of them, Decoupled Drupal might not be a great fit for your project. Still not sure? Great, carry on...
  • Are you hoping to “turn on” features in Drupal and use them more-or-less as-is? One of the major draws for Drupal is the massive ecosystem of free modules available from the open source community. Need Disqus comments on your blog? Simply install the Disqus Drupal module and turn it on. How about a Google Map on your contact page? Check out the Simple Google Maps module. Want to make a slideshow from a list of images? No problem: there are modules for that, too.
    With Decoupled Drupal, the ability to simply “turn on” front-end functionality goes away, since Drupal is no longer directly managing your website front-end.
  • Do your front-end requirements match Drupal’s front-end capabilities out-of-the-box? We work with a number of research and publishing organizations whose design goals closely align with Drupal’s capabilities. I’m hard pressed to recommend a decoupled approach in those cases, absent some other strong driver (see above).

Related Reading

Apr 04 2019
Apr 04

Webforms for Healthcare

I am looking forward to talking about my experiences in implementing webforms for healthcare.

This presentation will be my first time discussing the Memorial Sloan-Kettering Cancer Center's (MSKCC) implementation of the Webform module. I am even going to show the MSKCC project, which inspired me to build the Webform module for Drupal 8. This presentation will approach the topic of "Webforms for Healthcare" from two different angles.

The three big digital concerns for healthcare

First, we are going to explore Webform features and add-ons that address healthcare's core digital concerns: accessibility, privacy, and security. Accessibility and privacy are important topics that are going to be discussed throughout the Healthcare Summit.


The three primary audiences for healthcare

Second, we are going to explore how to leverage webforms to address healthcare's three primary audiences: Clinical Care, Research, and Education. Each audience has different use cases and requirements. After twenty years of working in the healthcare industry, I also know that doctors need to be included in all aspects of healthcare.


Join us at the Drupal Healthcare Summit

Join us at the Drupal Healthcare Summit on Tuesday, April 9th. Throughout the day we are going to talk about patient experiences, education, privacy, inclusion, accessibility, and more…

REGISTER TODAY

If you can't join us at the Drupal Healthcare Summit, you can watch my pre-recorded practice presentation below…


Apr 04 2019
Apr 04

The addition of Views to Drupal core is one of the most frequently mentioned benefits of Drupal 8. Drupal Views gives us a UI for creating data collections based on any desired criteria. One of the ways to fine-tune the results is to use Drupal Views filters. A level above regular filters stand contextual filters, which accept dynamic values. This helps us create flexible and interesting solutions. Let’s review Drupal 8 Views contextual filters in more detail.

Contextual filters & their difference from regular filters

  • Drupal 8 Views regular filters can only use a static value set at configuration time. For example, we can filter all cars with the brand name “Renault.” We can also let website users choose this value manually if we make the filter “exposed to visitors.”
  • Drupal 8 Views contextual filters are able to use dynamic values that change with context. For example, each logged-in user can see “Renault,” “Volkswagen,” and/or “Nissan” cars based on the preferences set in their user profile. So the same Drupal View will show different results to different users at the same moment.

Thanks to the Drupal Views UI, configuring the contextual filters requires no coding skills. They can be created and configured on the “Advanced” tab of the View. However, contextual filters demand a deep understanding of Drupal and should preferably be created by Drupal developers rather than by website administrators.


An example of creating Drupal 8 Views contextual filters

We will now create a simple task tracker with Drupal 8 Views. On the “My tasks” page, each user will only see the ones assigned to them. This setup will take a few preparatory steps but they will be fun. Those who are impatient can jump directly to the “Contextual filters” part a few paragraphs below. 

1. Preparatory steps for this setup only

1.1. Creating test users

Let’s start with going to “People — Add new user” and creating test users Jack Sparrow and Frodo Baggins. 



1.2. Creating the “Task priority” taxonomy vocabulary

Every decent task tracker needs task priority options. Let’s go to “Structure — Taxonomy — Add vocabulary,” create the “Task priority” vocabulary, and add these options as taxonomy terms: Blocker, Critical, and Minor.



1.3. Creating the “Task” content type

Let’s create the “Task” content type in “Structure — Content types — Add content type” and add fields to it. The “Title” field will be available by default. 

Here are the needed fields and the field type in brackets:

  • Assigned to (user)
  • Due date (date)
  • Task priority (taxonomy — with the specified vocabulary)
  • Task image

1.4. Creating tasks

Let’s create a few tasks in “Content — Add content — Task,” select Jack Sparrow or Frodo Baggins in “Assigned to,” and fill in all the other fields.

2. Creating our Drupal 8 Views contextual filter

2.1. Organizing all tasks into a Drupal 8 View

Now we will organize all tasks into a View in “Structure — Views — Add new view”:

  • View Settings: “Content” of type “Task”
  • Page Settings: Create a Page
  • Page Display Settings: Table of fields

Then we save the View. Since it shows fields, we need to add all the Task fields to it. Here is how our unfiltered task tracker looks. We only applied simple sorting by task priority to show “Blocker” tasks above “Critical” ones. But our View still shows tasks for all users.


2.2. Creating and configuring a Drupal 8 contextual filter

Both Jack Sparrow and Frodo Baggins would appreciate a page with their own tasks, for which we will use a contextual filter. In “Advanced — Contextual Filters — Add”, we select the “Assigned to” filter argument and save the filter.


We are immediately taken to the contextual filter configuration page. 

1) For the “When the filter is not in the URL” option, we select “Provide default value”. There, we select “User ID from logged in user” from the dropdown menu. 



2) For “When the filter is in the URL or default is provided,” we select “Specify validation criteria — User ID.” In the “Action to take if filter does not validate” option, we specify “Display contents of ‘No results found’.”
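For reference, these UI settings end up in the View's exported configuration. Below is a trimmed, hypothetical fragment of what the exported views.view.my_tasks.yml might contain; the view and field machine names (such as field_assigned_to) are assumptions based on this example, so check your own site's export rather than copying this verbatim.

```yaml
# Trimmed, hypothetical fragment of an exported View configuration.
# The field machine name (field_assigned_to) is an assumption.
display:
  default:
    display_options:
      arguments:
        field_assigned_to_target_id:
          id: field_assigned_to_target_id
          # "Provide default value" when the filter is not in the URL.
          default_action: default
          # The "User ID from logged in user" default argument plugin.
          default_argument_type: current_user
          # Validate the value as a User ID, and fall back to the
          # "No results found" behavior when validation fails.
          validate:
            type: 'entity:user'
            fail: empty
```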


2.3. Testing the Drupal 8 contextual filter

Let’s check what all users see if they visit the “my-tasks” page. We visit it in an incognito window and log in as Jack Sparrow. Success — he can only see what’s assigned to him. He will need to save Elizabeth Swann and find the Fountain of Youth.



Let’s now log in as Frodo Baggins. Good, he knows he needs to take the ring to Mordor and destroy it in the fire of Mount Doom.


Hopefully, the guys understand their epic missions, so that’s how a properly made contextual filter may save lives ;) A properly made Drupal contextual filter is what you will always have if you contact our Drupal team!

Get assistance with contextual filters

We have shared a simple example of using Drupal 8 Views contextual filters. Our Drupal experts are ready to help you configure any kind of contextual filter for your website and create interesting user experiences.
 

Apr 04 2019
Apr 04

The digital agency field is one that’s in constant flux. It’s very difficult to predict the scope of your work several months in advance. But, of course, this doesn’t mean you’ll turn down a project just because your resources are lacking. What are you going to do, then?

One possibility is to outsource the project or parts of it to remote partners. You actually have two options here; you can either hire a freelancer or get your developers from an agency that specializes in staff augmentation. 

Naturally, however, working with remote partners is a different process than managing the entire project in-house. Remote staffing entails its own unique challenges that demand adjusting your approach to some degree in order to get the most out of everyone involved in the project.

But, let us put your mind at ease - even these new challenges can be managed perfectly well. Lucky for you, we know the ins and outs of remote staffing, and have tailored our workflow specifically to accommodate a team of developers working on projects for diverse international clients.

In this post, then, we’ll dive into the most common remote staffing challenges. Our extensive experience on the matter at hand also enables us to provide efficient solutions to all of the challenges that we’ll enumerate and discuss in this post. After reading it, you’ll be equipped with the knowledge to effectively manage your remote teammates without having to worry about all the details of the working arrangement. 

1. Communication

The first and foremost challenge of remote staffing - or any kind of remote work, for that matter - is almost certainly communication. Good communication is an absolute must in order for a project to progress smoothly and launch successfully. We could even go so far as to say that poor communication is what lies at the heart of a lot of unsuccessful projects. 

It’s something that’s extremely important even when managing an in-house team - you can then logically assume that communicating smoothly and effectively with your remote partners is even more essential. 

One of the most frustrating things that can happen when communicating with remote partners is them not responding. Just think of it - hours can go by with you unaware of the progress of their tasks. Naturally, you’ll want some reassurance that you’ll be able to reach your remote workers when you need them.

We at Agiledrop understand how great a concern this is. As such, we make it a point to relay the importance of good communication to all new employees. 

Our developers are always available to the client during their working hours, and they inform the client of any absences (e.g. lunch breaks) they may have. They also synchronize twice each day, once when they begin their day and once when they’re getting ready to leave. 

This way, the client is always brought up to speed on any recent issues and developments, and has a much better overview of the project, as well as a much stronger relationship with the developer themselves. And, as we know, it’s always easier and more satisfying to work with someone you have a good relationship with. 

Another communication-related issue that we need to address is also the remoteness itself. An in-house team is much better at exchanging ideas and sharing their expertise in order to solve problems swiftly and more efficiently; however, a freelancer that you’ve hired, for example, doesn’t have the luxury of discussing things with peers that share a workspace. 

It’s true that the remote workers will usually have access to all of your communication tools, meaning they will technically be able to ask your in-house developers for guidance and/or help. Very often, though, they will instead try to solve the problem on their own - and spend copious amounts of time doing so, resulting in greater costs to you. 

Fortunately, this is rarely the case when working with a team of remote partners such as one provided by Agiledrop. While they will be separate from your in-house team and hence not so prone to exchanging knowledge with them, they will always have their own teammates to turn to and get inspiration from, despite them working on different projects for different clients. 

Therein lies the magic of outsourcing your work to an agency that puts huge emphasis on collaboration and teamwork. Even when hiring just one or two developers, you will benefit from the collective knowledge of their entire team. In this way, you will save both time and money, while at the same time not compromising the quality of the project at all.

2. Culture and location

A challenge that’s still very much tied to communication is bridging the cultural gap. Logically, it becomes increasingly important the more your remote partner’s culture differs from yours.

Huge distances between locations - and consequently huge time zone differences - can lead to unwanted hindrances to the project. Fortunately, even seemingly insurmountable cultural differences can be managed perfectly well if you tackle them appropriately. 

The first step in bridging the cultural gap is ensuring your remote partner possesses an adequate level of English. Granted, with English becoming progressively more prevalent as a means of international communication (English as a lingua franca), this is likely not something that you’ll need to worry about.

Very often, a certain level of English is a prerequisite for working at an outsourcing agency. It’s the same at Agiledrop: English proficiency is one of our top priorities when hiring developers. This way we’re able to preselect those that are both fluent in English as well as sociable and outspoken.

With a freelancer, this is slightly different, as there is no supervisor that sets those demands - but, seeing how freelancers are self-managed, you can pretty much expect them to have good communication (and English) skills, since, otherwise, they wouldn’t be able to work effectively. 

Still, it’s wise to get to speak to your remote partner to-be in person, not just via email, but through some form of video chat. Usually, agency leaders will have no problem scheduling a video chat where the potential outsourced developer(s) will also be present. 

However, overcoming the cultural gap takes more than just being able to understand one another. A remote worker’s extensive knowledge of English will be of little help to you if you’re unable to reach them. Here, we come to probably the biggest issue in establishing smooth cross-cultural communication: synchronizing both parties and scheduling meetings accordingly.

This is especially important in the case of large time zone differences, e.g. six hours or more. With an on-site team, this problem wouldn’t even exist; the team shares a workspace and, to some extent, also working hours. In this way, even those more spontaneous, unscheduled meetings are possible.

With a remote worker or a team of remote workers, however, this can prove very hard to achieve. If, for example, your company is based in the US and you decide to outsource a project to a European development agency, you can’t expect the remote developers to be available during the same time slots as your on-site developers.

But what if the needs of your project demand they be present for a meeting that takes place, from their perspective, late in the evening or even at night? You can be almost 100% sure that you won’t get the same quality of input; either they won’t be able to make it to the meeting, or, if they will, their subsequent work may suffer because of a disrupted biorhythm. 

The solution is to coordinate well with your remote partner and establish beforehand what the optimal hours to schedule meetings are. In the case of a large time difference, schedule your meetings for hours which still fall in the scope of your remote partner’s workday. This will give them more than adequate time to both be present at the meeting and continue with their work undisrupted. 

Last, but not least, you’ll probably want to make sure that your remote developers share the values of your in-house team, or at least hold similar ones. Those values can differ greatly from culture to culture, from location to location. Some cultures hold different views on punctuality than others; the same is true for values such as quality and transparency.

The best thing to do is speak with your partner’s leadership about these issues. By learning about the values of your potential partner agency, you’ll be able to select a partner whose vision, mission and values are aligned with those of your organization.

3. Trust

Another major challenge of remote staffing is the inherent uncertainty of it. Ever heard of the expression “don’t buy a pig in a poke”? Well, this is exactly what hiring remote partners can feel like - like buying a pig in a poke, or having no reassurance that what you’re getting is really what you paid for. 

And it’s a perfectly legitimate hesitation. How can you ensure that your remote workers are trustworthy and reliable? How do you know they are as committed to the work and as experienced as your in-house team? Actually - how do you know that your in-house developers are reliable and committed, at that? 

The short answer is that you just have to take their word for it. Usually, you won’t make a final hiring decision until thoroughly researching your new potential employee. But even CVs can be deceiving (pun almost intended) and dishonest. 

You’ll of course have to fact-check the information supplied in the CV. But, even if you find that everything checks out, how can you know that they’re really responsible for, say, the frontend of a website or application? You likely won’t find their signature hiding in the code or even cleverly concealed in the site’s design. 

At least with in-house developers, you’ll get a much better overview of their day-to-day and month-to-month performance. Granted, this will only be possible after they’ve been working with you for some time, i.e. after the investment has already been made. Still, it gives you more power and more control over the progress of the project(s) in which they are involved.

But, with a remote partner, you pretty much have to gamble, right? Well, yes - and, also, no. There might be some risk involved with hiring a freelancer - but you have all their references to check, which will help you make a more informed choice. Also, it’s relatively easy and straightforward to stop working with a freelancer if you’re dissatisfied with their work. 

The biggest risk of hiring a freelancer is actually something else - but we’ll address and discuss it a bit later, when we come to the relevant challenge. Right now, let’s concentrate on how you can make sure that your newly-hired remote partner or team possesses adequate expertise to effectively augment your staff rather than hinder their work. 

Again, this is a concern that we at Agiledrop have already pinpointed and successfully eliminated. Our approach guarantees that our clients always get the best possible people for a certain project; let us briefly describe how we have achieved this. 

The key component of this approach is our very effective training program: all our new developers go through an in-house onboarding project under the supervision of skilled mentors before they start working on any client project. This ensures that they familiarize themselves with all the state-of-the-art tools and practices, and can consequently seamlessly integrate themselves into the client’s team.

Our development leads are the ones responsible for the training of new employees - as such, they are also the ones who can best gauge the competency level and the suitability of a developer for a specific client and/or project. They select the most appropriate person based on actual hands-on experience of working with them, not just on a list of references. 

What this means for you, the client deciding to work with such a remote partner, is that the remote workers’ employers essentially do the fact-checking for you beforehand. All you have to do is check the references of the agency itself, which are quite often much more salient and informative than those listed in a CV. 

And, this agency that others have already been satisfied with then vouches for their personnel - naturally, they would want only competent people on their team, and the careful selection made by the leadership assures that you are provided with only the best of the best. 

The greatest thing about this approach is that it eliminates most of the risk for you. It transforms project outsourcing into an informed purchase rather than a gamble - and, going back to the point made in the intro about the constantly shifting nature of the digital, any degree of reassurance is more than welcome in this era of uncertainty and overabundance of choices. 

Right - we’ve covered the main issue associated with trust, namely, trusting in the competency of your remote partner(s). What about the next step, though - trusting these newly integrated teammates with access to your communication channels, with sensitive private information, trade secrets etc.? 

An employee of an agency will probably have an internal moral obligation to protect the privacy of their enterprise. It’s less likely, however, that their moral compass will be as strict when they work on projects for the agency’s clients. 

Again, the focus shifts to the agency itself: what is their company culture? What values do they hold? Is the importance of privacy clearly communicated to all their new employees? And, are there steps taken to ensure the maximum protection of privacy?

These are all questions you’ll need answers to, especially if the nature of your work demands a very high level of security. It’s vital that you find out about your potential partner’s attitude towards privacy. E.g., if they make their employees sign non-disclosure agreements, this is already a good sign that privacy is something they value.

It’s even better if they reassure you of their protection of privacy without you having to even ask - if this happens, you can be almost 100% certain that your privacy is in good hands. 

At Agiledrop, newly hired developers sign an NDA pretty much at the same time as their employment contract. Additionally, we handle all our passwords - as well as any clients’ passwords - with password management tools such as LastPass, especially when working from home. And, because of our strong company culture, the moral obligation to our company is extended to all the clients we work with. 

If you want additional protection, you can always add extra security layers to your own channels and services, such as obliging everyone to set up multi-factor authentication or change their password(s) every few months in the case of a longer-term partnership. 

4. Monitoring

This next challenge of remote staffing is actually still tied to trust: effective monitoring of someone who is working remotely. The main difference here is that the meaning of trustworthiness is actually closer to conscientiousness than to honesty; this is why we’re addressing it as a separate challenge. 

Even in the case of international or offshore offices, you’re generally able to monitor all your employees in-house. Having to monitor a team of outsourced remote workers, however, is a completely different beast to tame. 

Without actual physical supervision from your side, how can you be sure that developers working on your project remotely are actually doing the job? How can you know that they don’t just slack off when they turn Slack off? Even with a time-tracking tool such as JIRA or Teamwork, you can never really be certain; and finding out about their inactivity only after seeing a project not completed is not exactly helpful. 

This is likely a bigger issue when hiring a freelancer. Being self-managed, they are left to their own devices, which means you have only negligible supervision over their work. Admittedly, since they are most often experts of a specific field, and since they’re able to work extremely flexible working hours, you can probably expect the work to be done even with very little monitoring from your side. 

Well, but … What if it isn’t? What can you do if the remotely-working freelancer turns out to be a poor investment? Besides already having spent precious resources on them, you will now have to invest even more time and money into rehiring - which will, of course, come neatly packaged with all the hesitations and extra work we’ve outlined before: interviewing, fact-checking, uncertainty - and then some. 

Trying to stay as objective as we can, we believe a better and safer solution would be to partner with a staffing agency. Granted, you’ll face the same issues as with freelancers when it comes to management from your side; but, at the same time, you’ll benefit from the management coming from the agency’s side. 

Of course, your own project managers will be responsible for the project’s smooth progression - but you’ll be able to leave the management of your remote workers to the partner agency. While it’s true that this kind of dual monitoring demands a little extra synchronization, it definitely pays off. And, since the agency’s reputation is at stake, you can expect them to have a well-established system which guarantees the top-notch performance of their employees.

Yep, you guessed it - we have such a system at Agiledrop, and we’re very pleased with its results. The satisfaction of our clients is of paramount importance to us; at the same time, however, we realize how crucial the well-being of our employees is to the success of our clients’ projects. This is why we have devised a company culture that provides only the best for both clients and employees. 

We hold weekly sync meetings and collect feedback from both sides to ensure smooth communication throughout the duration of the project. This also enables us to spot and resolve issues quickly, before they turn into a detriment to the project. If you want to find out more about our company culture, we discuss it in more detail here.

Also, we take the meaning of “remote partners” at face value. After joining your team as a remote teammate, the developer assigned to you will dedicate themselves exclusively to working on your project. As such, you will essentially benefit from their full-time work without the need to micromanage and without worrying about any additional costs.

5. Cost and ROI

This leads neatly into the next challenge that we wanted to point out. While the previous ones were relevant to any kind of remote work, this one is more specifically a challenge of remote staffing: the costs incurred by hiring remote teammates via staff augmentation, and the return on investment of deciding on this option.

Here, the questions that you’re probably asking yourself are: how fast will my new remote worker adopt my tools, practices and workflow? How much will I have to invest into them before they are able to do the job that I’m paying for? Will the investment be a worthwhile one - or would I have been better off just growing my in-house team?

We admit that these are indeed important and difficult questions. There’s no universal all-around answer to them, except for “it depends”. As such, we can only speak from our own experience. 

Fortunately, though, experience in this field is something we have loads of. Having worked with a wide range of clients, our developers have familiarized themselves with all the most up-to-date development tools and practices - well, at least with those they haven’t already mastered during their onboarding. 

The entire cost of onboarding is thus already taken care of from our side; all you need to do is to integrate the new developer(s) into your in-house team - but you would’ve had to do this even with a new full-time employee.

What’s more relevant to you, however, is what else you’ll have to take care of when hiring a full-time employee - and, in contrast, what you won’t have to worry about when hiring remote partners. This is likely the main and most attractive reason for outsourcing your work. 

Because, let’s face it - you’ve read through some 2000 words about the challenges of remote staffing - there have to be some glaring benefits to it, too, right? Otherwise, why would so many businesses outsource their work to remote partners?

That’s right - there are obvious benefits! Actually, these are so great that we don’t even have to make a compelling case for them; they just speak for themselves. You probably know what we’re getting at, huh?

While a daily rate for your in-house employee may be lower than that of a remote hire, with the latter this is pretty much the only expense that you’ll have - not as much can be said about the former, though. 

Travel expenses, health insurance, vacation and sick days, the costs associated with onboarding, not to mention the necessary equipment… These are just the basics. Don’t forget about team-building events, healthy office snacks and all the various perks that create a pleasant working environment and take care of the motivation and well-being of your in-house team. It sure adds up - as you’re probably well aware.

So, if your top priority - or one of them, at least - is cutting down on expenses whenever possible, why not go for an option where every additional expense, apart from the rate itself, is already taken care of? 

But wait - there’s more! Referring back to the intro and the unstable nature of the digital - how can you know that you’ll have as much work in, say, half a year as you do now? And, more importantly, what will you do if you don’t?

You likely won’t want to get rid of your talented employees - but, at the same time, it won’t make sense to keep paying their salaries and all their expenses if there’s no work to be done for an indefinite amount of time. 

This will be even more important when you take into account the costs of finding and hiring a full-time employee. Since the demand for developers is already high - and constantly increasing - you can’t even be sure you’ll be able to find a full-time employee in the same area as your offices, which runs the risk of your search being completely fruitless - though no less expensive.

With a remote teammate, it’s a completely different story altogether. Outsourced remote workers are able to easily join your team and just as easily leave it - no hard feelings, no strings attached. And should you ever need to augment your staff again? No problem - agencies usually love working with clients with whom they’ve already established strong, trusting relationships. 

All of this gives you the flexibility to effectively scale when needed, while also greatly reducing most of the costs associated with hiring. And, going back to the “trust” issue, since it’s easier to gauge the competency of the remote hire in such a partnership, this also means that the cost will definitely reflect the quality of the service you are receiving.

6. Unexpected and uncontrollable factors

So far, we’ve covered most of the main questions that likely pop up when deciding on remote staffing. We saved this next challenge for last, though, since it’s a more general one - but just as pertinent. It deals with all the various unexpected issues and things that are just, well, out of our control.

For example - what do you do when your newly hired remote worker suddenly falls ill? Or, even worse, what if they’re in an accident? You can’t blame anyone, of course, but the truth of the matter is that your work suffers because of it. 

Here, the distinction between outsourcing work to freelancers versus staffing agencies becomes especially relevant. Remember how we promised to talk more about one of the biggest challenges of hiring a freelancer? Well, since a freelancer is a one-man-band, you’re pretty much screwed if they go on sick leave (or, God forbid, just randomly stop responding - remember how crucial communication is).

When this happens, you need to redo the entire hiring process, which is more complicated and time-consuming with a freelancer or an in-house employee than with an agency. Plus, depending on your contract, you’ll probably still have to pay for the freelancer’s incomplete work.

When an in-house employee falls ill or has any kind of medical emergency, it’s also not the best situation for you. In fact, it may even be more costly than with a freelancer: you have to cover their health insurance, as well as pay their salary during their sick leave. 

And, while you have some reassurance in the fact that they’ll likely get better soon, you’re still short-staffed in the meantime. You can try to patch things up by distributing the person’s tasks among the rest of the team (and even that only works if the team has the needed expertise), but that will just lead to burnout and a generally poor employee experience.

The best possible solution, then, is definitely partnering with a staffing agency and outsourcing your project(s) to their developers. Since such an agency specializes in staff augmentation, you can count on them to always provide suitable replacements if anything unexpected happens to your current remote hire.

This is exactly the approach we employ at Agiledrop - and it is only made more effective thanks to the onboarding program that we mentioned earlier. The in-depth knowledge of the competency of our developers allows us to not only provide the most adequate person at the start of a working arrangement, but also to ensure that the skillsets of any replacement we have to make match those of the original hire. 

We also do our best to anticipate the unexpected - at least in the realm of what’s under our control. We urge all our employees to notify us of any emergencies as soon as they are made aware of them. In this way, we’re able to remedy the situation and arrange a replacement way before the developer’s emergency can be of any disadvantage to the client. 

This approach eliminates the friction of rehiring, saving costs without compromising the quality of the services in any way. If you’re able to find a partner agency that can guarantee such a level of competence and dedication, you’ll know you and your project are in good hands.

In conclusion

There you have it - the six most pressing challenges of remote staffing, coupled with the solutions that agencies which specialize in outsourcing, including Agiledrop, have successfully relied on. We hope this comprehensive blog post has armed you with the necessary savvy to make more informed decisions when outsourcing your work and managing outsourced projects.

Are you currently looking for remote partners to help with your project and this post just sealed the deal for you? We’d be happy to work with you - contact us and we’ll immediately start working on a solution that best fits your needs!

Apr 04 2019
Apr 04

The Drupal community has been abuzz for the past two years with talk of "Becoming Headless" or "Decoupling All The Things." The trend raises reasonable questions from end users who feel this means Drupal is moving into a space that no longer represents them. We hear similar concerns from Drupal Commerce users when we talk about delivering headless commerce. However, we don't believe anyone should be worried that improving support for Drupal as a REST API server will detract from efforts to improve its utility as a traditional CMS.

From our perspective, you can (and we do) support both traditional applications, where Drupal itself provides the front end, and headless applications, where a JavaScript application renders data served from a Drupal-powered REST API, at the same time. In fact, in this post I'll demonstrate how supporting the latter has actually improved Drupal (and Drupal Commerce) for everyone. Headless initiatives help uncover bugs and fine-tune the underlying application architecture.

Drupal core, API-First, and headless commerce

When you remove the default presentation layer from a system, you are bound to find interesting problems. How much logic is embedded in a Drupal field formatter or field widget? What happens when those are not used to render or manipulate an entity’s values? As a quick example, work in the API-First Initiative related to taxonomy terms turned up a validation bug where code restricting parent terms to the same vocabulary only executed in the context of the default entity form. Fixing that bug to prevent invalid relationships via API updates contributed to improving Drupal core overall even though the issue wasn't affecting traditional Drupal usage.

We're looking for similar wins as we explore improving Drupal Commerce to support headless commerce applications. Not only will it make our code more disciplined, it will help us improve the user experience of the standard modules themselves through the use of JavaScript. I will be speaking about the process a little at DrupalCon Seattle in a session titled Delivering Headless Commerce.

In early 2018 I wrote about why Drupal Commerce needs the API-First and JavaScript Modernization initiatives. The outcome a short time later was our progressively decoupled Cart Flyout module, which provided a JavaScript-enabled take on managing cart interactions. By the end of last summer, the module included a complete replacement for the Add to Cart form that just used JavaScript. This module does not require a fully decoupled architecture but still provides important performance and scalability enhancements to every Drupal Commerce user. However, it did come from our efforts to support a fully headless architecture.

Consider a couple examples of our work toward a fully headless Drupal Commerce improving the modules more generally:

Cart should grant "view" access to orders available from a user's cart session

While working on the Cart API module to find ways to use JSON:API, I realized we were missing entity access control that would allow anonymous users to view the orders tracked in their cart session. With query-level entity access landing in Entity API, fetching orders over JSON:API or GraphQL automatically restricted the returned orders to carts belonging to authenticated users. We realized we needed to update Commerce Cart to support this for both traditional and headless use cases.

Provide a constraint on the Coupons field to verify referenced coupons are available

I set a goal earlier this year to support coupon redemption through our Cart API. I ran into a problem early on while combing through our code to recall where and how we validate coupons. I knew the code existed and expected it'd be pretty simple to reuse. Unfortunately, it turned out we only put validation logic in the default coupon checkout pane code, meaning a coupon redemption endpoint in the Cart API would have to reproduce the code to support client-side coupon validation. That sort of duplication is bound to lead to bugs or out-of-sync logic.

What was the solution? Add a validation constraint to our coupon reference field on orders. This constraint contains the code that validates a coupon for an order and ensures its related promotion applies. The RESTful Web Services module and JSON:API automatically run validation against entities when they're modified, triggering this check and allowing invalid coupons to be detected right away. This in turn let us simplify our coupon redemption form as well. The final patch is still in progress, but once landed, it will make it easier for any Drupal Commerce site, headless or not, to add its own customizations on top of the default coupon redemption form or write its own.
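To illustrate why a single shared constraint beats duplicated checks, here's a minimal sketch in plain JavaScript (Drupal Commerce implements this in PHP; the field names and helpers below are hypothetical): one validation function holds all the rules, and both the checkout form and the API endpoint delegate to it, so the two paths can never drift out of sync.

```javascript
// Hypothetical, simplified coupon model -- not Drupal Commerce's actual API.
// The point: ONE function holds the rules, and every entry point reuses it.
function validateCoupon(order, coupon) {
  const errors = [];
  if (!coupon.active) {
    errors.push('Coupon is not active.');
  }
  if (coupon.usageLimit !== null && coupon.usageCount >= coupon.usageLimit) {
    errors.push('Coupon usage limit reached.');
  }
  if (order.total < coupon.minimumOrderTotal) {
    errors.push('Order total is below the coupon minimum.');
  }
  return errors;
}

// Both the checkout form handler and the cart API endpoint delegate to it,
// instead of each carrying its own copy of the rules.
function redeemViaForm(order, coupon) {
  return validateCoupon(order, coupon);
}

function redeemViaApi(order, coupon) {
  return validateCoupon(order, coupon);
}
```

With the rules centralized like this, a new rule (say, an expiry date) is added once and automatically applies to every redemption path, which is exactly what attaching a constraint to the field buys you.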

What’s next for 2019?

We've been pretty busy preparing Centarro Toolbox for release at DrupalCon Seattle, so first I expect I'll take a deep breath and celebrate the Drupal 8.7 release. We're planning now how to ensure Drupal Commerce users can harness the power of Layout Builder and Media, and we'll be integrating them into our Belgrade-based demo.

Second, we'll continue to improve the developer experience with Drupal Commerce across the various API-based modules. Headless Drupal Commerce is already working in the wild. The team behind Spark POS uses JSON:API and Drupal Commerce to handle ZKungFu’s billion dollar business. 1xINTERNET has been pushing React-based front ends for a while and will even be presenting their work at DrupalCon. As project maintainers, we want to empower teams to build similar applications and extend our support from product catalog navigation and shopping cart manipulation through to checkout completion and order management.

Apr 03 2019
Apr 03

Electric Citizen is heading to DrupalCon Seattle next week! We're pleased to sponsor again this year, and send several members of the team to represent.

Look for us in the exhibit hall in booth #209, where we'll be sharing some cool EC swag, and looking to make new friends and connections in the Drupal community. This year we'll have some awesome knit hats, handmade in Minnesota, as well as some other goodies.

Keep an eye out for Citizen Dan, Citizen Tim, Citizen Aundrea (DrupalCon newbie!) and Citizen Adam, as we make our way through another DrupalCon.

What We're Looking Forward To

The session schedule seems especially dense this year, with one less day available than in previous years. But we've got a handful of items already flagged. 

Citizen Dan
I'll be running around, trying to catch as much as possible while also helping out at the booth, but some items on my list:

Citizen Aundrea
When I'm not working the booth, I'd like to catch a few sessions.

Citizen Adam
I always look forward to learning about how others are approaching the challenges that we all face when building beautiful, accessible and performant Drupal sites. This year I'm especially looking forward to:

Citizen Tim
As with most DrupalCon visits, I try to focus on where Drupal is going, and not necessarily where it has been. Here are a few forward-looking sessions I'm anticipating:

Apr 03 2019
Apr 03

JavaScript frameworks have raised the bar of website speed to the sky. Still, it’s just the beginning. GatsbyJS, a tool based on React and GraphQL, impresses the world with fast websites and applications it creates. Let’s take a look at combining Drupal and GatsbyJS to achieve high website speed.

What is GatsbyJS?

GatsbyJS is an open-source React-based framework for building blazing fast websites and applications. Gatsby pulls data from various sources like APIs, decoupled CMSs, databases, markdown, YAML, and more, using GraphQL. 

Gatsby is also called a static site PWA (progressive web app) generator. However, developers can go beyond static sites. Each Gatsby site is a full-fledged React application, so it’s possible to create dynamic apps with it (online stores, blog sites, user dashboards, and so on). 

Created in 2015, GatsbyJS has reached 417,700+ weekly NPM downloads and 32,600+ stars on GitHub. The showcase of projects built with Gatsby on the official website alone includes 450+ sites.

Its latest version, Gatsby v2, has 130+ starter kits for quickly spinning up a Gatsby site. Gatsby also has 740+ source plugins to pull data from specific sources (including Drupal). Gatsby websites are also easy to deploy anywhere.

Gatsby and high website speed

“Building blazing fast websites” is part of the official Gatsby slogan. How fast is “blazing fast”? We performed a Google PageSpeed test on the Gatsby default starter demo hosted on Netlify:

Google PageSpeed test for Gatsby default starter

Here is the same test performed on the Gatsby starter blog demo hosted on Netlify:

Google PageSpeed test for Gatsby default blog starter


To achieve high website speed results, Gatsby uses:

  • prefetching
  • lazy-loading
  • inlining critical CSS
  • code splitting
  • optimized Webpack configurations
  • server-side rendering
  • the accessibility-focused Reach Router

and more.

GatsbyJS recognized by the Drupal community

Drupal, with its strong content management features, can provide a powerful data source for the fast and lightweight Gatsby frontend. So the idea of combining Drupal and Gatsby has inspired the Drupal community a lot. Decoupling Drupal 8 with Gatsby has become the topic of speeches and workshops at Drupal meetups. Among them:

Connecting Drupal and GatsbyJS

So let’s take a look at how Gatsby and Drupal are connected. To combine them, we will need:

  • a Gatsby website
  • a Drupal website
  • the Gatsby source plugin for Drupal to connect them

1. Preparing the Gatsby website

To get a Gatsby website, we first need to make sure we have NodeJS and NPM installed. Let’s then install the Gatsby command-line interface:

   npm install -g gatsby-cli

Next, we create our new project called “drupal-gatsby” (this command installs the Gatsby default starter from Git) and go to its folder:

   gatsby new drupal-gatsby && cd drupal-gatsby

And we run the command that starts the development server:

   gatsby develop

We are now able to see our Gatsby site with its default starter “astronaut” design at http://localhost:8000/

GatsbyJS website


Let’s see what we have in the project. There are config files and the src folder with components (header, footer, etc.), pages, and images.

The gatsby-config.js file contains the configuration, including the site’s name. Let’s change this name from “Gatsby Default Starter” to “Drupal-Gatsby website by Drudesk” in this file:

  siteMetadata: {
    title: 'Drupal-Gatsby website by Drudesk',
  },


The src/pages folder has the files for all our pages. It’s possible to create new pages right there by copying the existing page files and putting new page names into them. And then, for example, the “news” page will be available at the route “/news.”

The index.js file is where we can change the greeting phrases on the main page. The changes apply instantly, without page reload. 

The “gatsby develop” command stays active in the terminal and rebuilds the page in real time after each of our changes. So reactive — this is JavaScript, after all!

Gatsby-Drupal website by Drudesk


By visiting http://localhost:8000/___graphql, we see GraphiQL, an in-browser GraphQL IDE to manage our website’s queries. The left side is for queries, the right one is for responses from the server. The “play” button will run the queries. 

Pressing Ctrl+Space offers autocomplete options for queries; it works when we open a curly brace. Let’s choose “allSitePage” and ask for each page’s path:

{
  allSitePage {
    edges {
      node {
        path
      }
    }
  }
}

And we see that GraphiQL returns the results for all site pages:

GraphQL in GatsbyJS

2. Preparing the Drupal website

OK, the Gatsby site is ready, so it’s time to pull the Drupal website’s data. On our Drupal site, we install and enable the JSON:API module, which will instantly prepare our API endpoints. As of Drupal 8.7, JSON:API is included in Drupal core, so there will be no need to install it.

The JSON:API Extras module will give us a UI for fine-tuning the exposed resources. Permissions for accessing content via JSON:API also need to be granted to the anonymous user role on our Drupal website.
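Once the module is enabled, content becomes available at URLs like /jsonapi/node/article, returning documents in the standard JSON:API format: a top-level data array whose members carry their fields under attributes. A quick sketch (with a made-up sample payload, not a real site's output) of pulling titles out of such a response:

```javascript
// A trimmed-down example of what Drupal's JSON:API module returns
// for /jsonapi/node/article (sample data for illustration only).
const response = {
  data: [
    { type: 'node--article', id: 'uuid-1', attributes: { title: 'First article' } },
    { type: 'node--article', id: 'uuid-2', attributes: { title: 'Second article' } },
  ],
};

// Every resource keeps its fields under `attributes`, so extracting
// a field works the same way for any content type.
function extractTitles(doc) {
  return doc.data.map((resource) => resource.attributes.title);
}

console.log(extractTitles(response)); // → [ 'First article', 'Second article' ]
```

This uniform shape is what lets gatsby-source-drupal turn every endpoint into queryable Gatsby nodes without per-site configuration.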

3. Pulling data from Drupal to Gatsby

To connect Gatsby and Drupal, there is the gatsby-source-drupal plugin. Let’s install it in our Gatsby site’s root directory:

   npm install --save gatsby-source-drupal

It’s time to tell Gatsby where to pull data from. In the gatsby-config.js file, we need to add one more entry to the plugins array. In it, we include the Drupal source plugin, our Drupal website’s URL, and “jsonapi” as the apiBase. 

{
  resolve: `gatsby-source-drupal`,
  options: {
    baseUrl: `http://*our-drupal-website*`,
    apiBase: 'jsonapi',
  },
},

And we run this magic command again:

   gatsby develop

We will now see in the terminal that Gatsby is “starting to pull data from Drupal.” When we are informed of the successful compilation, we can come back to GraphiQL and see it has many more options for autocomplete. They now include Drupal data.

GraphQL querying Drupal data for GatsbyJS


We query “allNodeArticle” and specify it to show the node title. We see that Gatsby responds with the article titles from our Drupal site. Gatsby has successfully fetched data from Drupal!
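Written out, that query uses the connection fields Gatsby v2 generates for the Drupal source:

```graphql
{
  allNodeArticle {
    edges {
      node {
        title
      }
    }
  }
}
```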

GraphQL querying Drupal article titles for GatsbyJS


We can expand and shape our queries to display absolutely anything we like. And we can build any Gatsby pages with Drupal data in the src/pages folder by inserting our GraphQL queries into the page files. Finally, the “gatsby build” command will generate the production-ready version of our site.

Assistance with your Drupal and Gatsby setup

This has been just a Drupal-Gatsby sketch, but Gatsby’s settings and opportunities are endless. You can always rely on our Drupal team for any tasks related to your Drupal and Gatsby setup. We will help you with the configuration or create the entire setup from scratch. 

Enjoy high website speed with the latest JavaScript technologies and Drupal!
 

Apr 03 2019
Apr 03

Why would you still want to opt for a Drupal multisite setup? What strong reasons are there for using this Drupal 8 feature?

I mean when there are so many other tempting options, as well:
 

  • you could use Git, for instance, and still have full control of all your different websites, via a single codebase
  • you could go with a Composer workflow for managing your different websites
     

On one hand, everyone's talking about the savings you'd make — of both time and money — by keeping your “cluster” of websites properly updated from a single codebase. And yet, this convenience comes bundled with certain security risks that are far from negligible.

Just think single point of failure...
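For context, the mechanics of the feature look roughly like this on disk (hostnames and directory names below are illustrative): one shared codebase, with each site keeping its own configuration under sites/, optionally mapped via sites.php.

```text
drupal/                      # one shared codebase
├── core/                    # shared by every site
├── modules/                 # shared contributed/custom modules
└── sites/
    ├── sites.php            # maps hostnames to site directories
    ├── site-a.example.com/
    │   └── settings.php     # site A's database credentials and config
    └── site-b.example.com/
        └── settings.php     # site B's database credentials and config
```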

Now, to lend you a hand with solving your dilemma, let's go over the key Drupal multisite pros and cons. So that, depending on your:
 

  • developers' skill level
  • current infrastructure 
  • project budget
  • hierarchy of priorities
  • host capabilities
  • multi-site infrastructure's specific needs
     

… you can decide for yourself whether a Drupal multisite setup suits your situation or you'd be better off with one of its valid alternatives.

And whether you agree that it should eventually get removed from Drupal 9.x or not.
 

1. Drawbacks for Using the Multisite Feature/Arguments for Removing It

Now, let's go over this built-in Drupal feature's main limitations, the ones that might just make you think twice before using it:

  • there's no way to update the core of just one Drupal website from your setup; you're constrained to update them all at once, every single time
     
  • it becomes quite challenging to assign a team with working on one (or some) of your websites only
     
  • it's not as richly documented as other built-in features (especially if we consider its “age”)
     
  • it exposes your Drupal multisite setup to security vulnerabilities; it's enough for one website in the “cluster” to get compromised (accidentally or intentionally) to put all the other ones at risk
     
  • reviewing code becomes a major challenge: you can't “get away with” writing code for one website only; instead, any change to shared code needs to be tested on all the websites included in the setup, against all breakpoints and so on...
     
  • putting together test and staging environments gets a bit more cumbersome
     
  • in order to efficiently manage such an infrastructure of websites, strong technical skills are required; are there any command-line experts in your team?
     
  • having a single codebase for all your Drupal websites works fine if and only if they all use the same settings, same modules; if not, things get a bit... chaotic when, for instance, there's a security issue with one module, used on all your websites, that affects your entire ecosystem
     
  • also, since your hypothetical shared database is made up of a wide range of tables, when you need to migrate one site only, you'll have... “the time of your life” trying to identify which tables belong to which websites and which ones they all share

2. Top 3 Reasons to Go With a Drupal Multisite Setup

Now that we've taken stock of the main drawbacks for leveraging this Drupal feature, let's try to identify the main reasons for still using it:
 

  1. A weighty reason is the time and money you'd save on updating your “cluster” of sites. With the right command-line experience, you can run the due updates in just one codebase and have them apply across all your websites simultaneously
     
  2. It's an approach that becomes particularly convenient if you need self-hosting for your setup (e.g. take the case of a university hosting all its different websites or a Drupal distribution provider...)
     
  3. You'd be using less memory for OPcache, and this benefit becomes particularly tempting if you're dealing with RAM constraints on your servers
     

3. In Conclusion...

There still are solid reasons to opt for a Drupal multisite setup. Reasons that could easily turn into strong arguments for not having it removed in Drupal 9.x...

But there are also equally strong reasons to be discouraged by the idea of leveraging this age-old feature. Add to that the fact that, from Docker to Composer and Git, you're not running out of options for managing your “cluster” of websites.

In the end, the decision depends on your situation, which comes down to specific factors like budget, hosting capabilities, whether your websites are using the same modules, etc.

The answer to your “Are there any valid reasons for using the Drupal multisite feature?” can only be:
 

“Yes, there are, but they're counterbalanced by certain disadvantages to consider.”

Image by Arek Socha from Pixabay

Apr 02 2019
Apr 02

Amazee Labs webinars allow us to share our knowledge and experience with the community. Last week we discussed the challenges in choosing the right CSS-in-JS solution and the advantages of using CSS modules.

After a couple of years of building decoupled sites, the Amazee Labs team has tried several different CSS-in-JS solutions and found this one to be best suited to the needs of our development team.

CSS Modules is a mature project with a syntax that is a superset of CSS, similar to Sass. It makes it easy for you to “think in components” without having to worry about BEM class naming. It automatically generates locally-scoped CSS class names, so you can use “.wrapper” in multiple files without conflict.

It also allows integration of “global” class names from other code (like JS libraries or third-party CSS). With CSS Modules you get automatic dead-code elimination, as only the CSS used on the page is ever sent to the browser. Best of all, CSS Modules can be used with any JavaScript framework, including React, Angular and Vue.js.
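To make “locally-scoped” concrete, here's a toy sketch of the idea (not the real compiler's algorithm, which also appends a content hash): the emitted class name is derived from the file as well as the local name, so two components can both declare .wrapper without colliding.

```javascript
// Toy illustration of locally-scoped class names. The real CSS Modules
// tooling hashes differently, but the principle is the same: the output
// class name mixes in the source file, making each local name unique.
function scopedClassName(file, localName) {
  const fileToken = file
    .replace(/\.module\.css$/, '')        // drop the extension
    .replace(/[^A-Za-z0-9]/g, '_');       // make it a safe class token
  return `${fileToken}__${localName}`;
}

// Two components can both use ".wrapper" without a conflict:
const headerWrapper = scopedClassName('Header.module.css', 'wrapper');
const footerWrapper = scopedClassName('Footer.module.css', 'wrapper');

console.log(headerWrapper); // → Header__wrapper
console.log(footerWrapper); // → Footer__wrapper
```

In real usage the bundler performs this renaming at build time and hands your component an object mapping local names to the generated ones, which is why you never have to think about BEM-style prefixes yourself.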

Watch the webinar recording online to learn about:

  • Components without BEM

  • Locally-scoped class names

  • Dead-code elimination

  • Multi-platform support

  • Nested rulesets

  • Cross-component composition

  • Sharing variables between your JavaScript and your CSS


Catch up on our previous webinars here:

Sharing knowledge and learnings is a key value at Amazee Labs. Keep an eye out for future webinars here!

Apr 02 2019
Apr 02

For many businesses, eCommerce is an increasingly important trend. With the potential for frictionless and simple exchanges, doing business on the web has never been more appealing to customers. To make eCommerce work for your business, though, your site needs to be robust enough to handle every step of every transaction.


drupal decouples commerce

Photo by Igor Miske

With its open source community and robust core, the Drupal platform is a worthy solution for any and all eCommerce needs. While a site built on Drupal will have the enterprise capabilities needed to handle a wide array of business needs, one of the major selling points of the platform is its flexibility. Drupal has the ability to be decoupled, allowing you to use Drupal as your backend CMS while utilizing a third-party solution for your front end. If your brand is tied to an existing look or design, a decoupled or “headless” solution will let you have your cake and eat it, too.

As you might imagine, headless solutions have major implications for the possibilities of eCommerce platforms. At this year’s MidCamp, an annual meetup of the Drupal community in the Chicago area, Commerce Guys Product Lead Matt Glaman took a look at the future of headless commerce solutions. What emerged from the session painted a picture of progress, as developers are continuing to create solutions in Drupal that will power a wide array of eCommerce solutions.

Decoupled Drupal Facts & Myths

Leading the path toward making headless commerce more feasible is Commerce Guys, a Drupal software firm that held a couple of events at MidCamp on the subject. Best known for creating the Drupal Commerce module and the shopping-cart software Ubercart before it, Commerce Guys brought years of experience in the eCommerce space to this year’s event.

Offering a look at the technical side of Drupal-based eCommerce, Glaman examined the various challenges and solutions surrounding headless commerce. As he explained, the heavily structured data model of eCommerce in general presents a challenge, as do the relationships between the various layers in a site, such as the data and presentation layers. Making sure that the connections between the Drupal-backend and the myriad of frontend possibilities are robust and stable is essential to making headless commerce solutions work for users.

To address these challenges, Glaman highlighted a variety of existing API solutions. To enable headless commerce, the available options include the GraphQL data query language, JSON-RPC, JSON:API and the RESTful Web Services spec. Though the latter two have the convenience factor of being included in Drupal Core, all of these solutions have their strengths and weaknesses in the context of headless commerce.

The efficacy of each solution varies depending on which stage of the eCommerce process you look at. For catalogues and product display pages, JSON:API offers strong query capabilities but doesn’t support mixed-bundle collections. A GraphQL solution, on the other hand, supports cross-bundling, but queries can become very large. Most existing solutions also struggle with “add to cart” forms, as it can be challenging to create reusable solutions. Overcoming that hurdle requires using solutions in concert, such as a GatsbyJS/React and GraphQL build. Finally, for carts, JSON:API requires integration with a Cart API, while the aforementioned query issues with GraphQL require the use of GraphQLMutation plugins to solve. For a deeper look into the nuances of these existing headless commerce solutions, you can watch Glaman’s full presentation here:

[embedded content]

So, while headless commerce in Drupal is achievable, it takes some effort to get to a point where it can handle your business’ eCommerce needs. Luckily, the strength of Drupal’s open source community means that you don’t have to wait for the Commerce Guys to fix everything. A firm like Duo can help you fine-tune an eCommerce solution to fit your needs. Don’t just take our word for it; our previous work with the Chicago Botanic Garden demonstrates our prowess in designing unique and robust eCommerce platforms.

If you’re interested in exploring a headless commerce platform on Drupal or are just looking to make your online business run more smoothly, Duo is the right partner for you.

Explore Duo

Apr 02 2019
Apr 02

by David Snopek on April 2, 2019 - 11:59am

When we originally announced that we'd be providing Drupal 6 Long-Term Support, we committed to supporting our customers until at least February 2017.

Each year in the spring, we've taken a look at the state of Drupal 6 and decided whether we'll extend support for another year, and if we need to make any changes to our offering. Here are the articles from 2016, 2017, and 2018, where we announced an additional year each time and discussed any new concerns (for example, PHP 7 support).

Today, we're announcing that we'll be extending our Drupal 6 Long-Term Support two more years until at least February 2022!

I'm sure there will come a time when it no longer makes business sense to pour resources into Drupal 6 for the few remaining sites; however, it's already clear to us that there's enough demand for a couple more years.

Also, now that we know when Drupal 7 will reach its End-of-Life, we've started to plan for that, and decided that we'd like D6LTS to last at least until then (which is why we're announcing an additional 2 years this time, rather than just 1).

Regarding Drupal 7: we've officially applied to be a Drupal 7 Extended Support vendor and have been accepted. :-)

Read on to find out more!

Why February 24th, 2022?

Well, we've been using the February 24th date because Drupal 6 originally reached its End-of-Life on February 24th, 2016, and we've been taking it one year at a time.

We recently learned that Drupal 7's End-of-Life will be in November 2021.

We want Drupal 6 LTS to continue at least through Drupal 7's community support so... the following February 24th it is!

Since we've been able to extend so many times it's entirely possible we'll extend it again past 2022, but no promises at this point. (See the last section in this article for our reasons why...)

There's still TONS to do for Drupal 6

While it can be a little hard to predict the challenges that Drupal 6 site owners will face in the future, don't worry - I'm sure there will be plenty to do. :-)

This past year included a focus on adding PHP 7.2 support. Given that PHP 7.2 will reach its End-of-Life in November 2020, we'll certainly be adding support for PHP 7.3 and 7.4.

Also, hosting providers and Linux distributions have started bundling MySQL 8, which also requires updates to Drupal 6 core and contrib modules. A community contributor has been leading the effort on that.

And, of course, we'll continue doing security releases and the occasional bug fix. :-)

A couple months ago, I wrote that we'd made 99 D6LTS releases up until that point - we're already up to 113.

In fact, rather than dwindling away with time, there's more to do for Drupal 6 than ever!

What are your plans for Drupal 7?

Recently, things have been coming into focus for Drupal 7's End-of-Life and the Drupal 7 Extended Support (D7ES) program.

Compared with the Drupal 6 Long-Term Support (D6LTS) program, there's a lot more information that's known further in advance, as well as quite a few more rules and requirements for vendors looking to participate in the program.

You can see the full details of the D7ES program in this PSA from the Drupal Security Team.

We still don't know the full details of our offering, but we can say this:

  • It will be very similar to our D6LTS offer
  • We'll be providing Drupal 7 support until at least November 2024, because vendors are required to participate for at least 3 years

More details will come as we get closer to Drupal 7's EOL!

Will you support Drupal 6 forever?

While some of our customers would love it if we'd support Drupal 6 forever, the answer is "no."

Our service is billed at a relatively low fixed monthly fee, so it depends on a certain amount of scale and overlap between our customers' needs in order to be profitable.

This is great for our customers because, by "sharing the load" with other customers who have similar needs, they pay less than they'd probably pay hourly for services dedicated to their site alone! But it also means that when enough of our customers quit or upgrade to being Drupal 7 or 8 maintenance and support customers, providing Drupal 6 LTS will be a loss for us.

When that happens (and it inevitably will), then we'll have to either (a) charge higher prices to make up the difference or (b) stop providing Drupal 6 LTS.

But don't worry - we'll let you know long in advance of when that is coming!

In the spring of 2021, we'll be announcing any changes to our Drupal 6 LTS offering, including:

  • Whether or not we'll be extending Drupal 6 support,
  • If there will be any changes to the price or service offered,
  • And if we have any special offers to help upgrade the remaining Drupal 6 sites

But for the time being, you can expect our Drupal 6 LTS to last until February 24th, 2022!

Apr 02 2019

Article 13: Copyright re-invented

The European Union has not updated its copyright laws since 2001. Now it is aiming to change that and bring them in line with the "digital era." Most of these changes are uncontroversial; however, Article 13 will have a huge impact on the way content is shared on the internet. In essence, hosting platforms will become responsible for making sure that the content uploaded to them complies with copyright law.

How Article 13 shifts the balance of power for creators and publishers

The goal of Article 13 is to fix the problem of value distribution in a certain set of industries, especially the music industry. The trouble with Article 13 lies in the services it targets and in its broad yet vague goal: it will apply to all types of copyrighted works. There is no reason for an article intended to strengthen the bargaining power of the music industry to impose costly responsibilities on platforms that have nothing to do with sharing music. And because the article is so vague, misunderstandings and misinterpretations are bound to arise, leaving legal action as the only way to settle what it actually means.

Buckle up for the consequences of Article 13

So how are hosting platforms going to tackle this new challenge? Human review is out of the question: consistently monitoring the huge amounts of data being uploaded, and doing so in a timely manner, is virtually impossible unless you have a small army at your disposal. That means platforms will have to put automated filters in place, in the form of bots or AI. Ok, so where is the problem?

Big corporations win, small companies lose

One problem is that such a system will be extremely expensive to adopt. Smaller platforms will not be able to afford one and might be forced to opt out of the game altogether. This will stifle the innovation that new, small competitors bring to the EU market. On top of that, the already established tech giants will be able to afford such systems, meaning they will hold even more power.

Another problem with this approach is that an AI or bot cannot reliably tell the difference between genuine copyright infringement and content that is meant as humour.

Is this goodbye to the meme culture?

What this means is that if a funny picture is based on a scene from a movie, the filtering system will regard it as copyrighted content and remove it from the platform.

Although the EU has made it clear that exceptions to the rule will cover content that is a "quotation, criticism, review, caricature, parody or pastiche," the problem remains: how will filtering systems tell such content apart from real copyright infringement?

“There is a module for that”

With Drupal being a free, open-source CMS, this is a chance for the Drupal community to shine and bring an advantage to the game. By developing a free filtering module, the community could give Drupal-based websites a clear advantage over the competition. This would rebalance power between the tech giants and small companies: instead of paying for software developed by Google or Facebook, for example, companies would get it for free from Drupal. Small companies would then have an extra incentive to adopt or migrate to Drupal. In this case, Drupal would be their knight in shining armor.
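To make the filtering idea concrete, here is a deliberately naive sketch of an upload filter: exact fingerprint matching against a registry of known works. All names here are hypothetical, and a real filter would need fuzzy or perceptual matching (a re-encoded file has a different hash), which is exactly why the article expects expensive bot/AI systems and worries about false positives on parody.

```typescript
type Verdict = "allow" | "block";

// Naive exact-match filter. Fingerprints (e.g. file hashes) would be
// supplied by rightsholders under Article 13's "relevant and necessary
// information" clause.
class UploadFilter {
  private registry = new Set<string>();

  register(fingerprint: string): void {
    this.registry.add(fingerprint);
  }

  check(fingerprint: string): Verdict {
    // Exact matching only: trivially evaded by re-encoding, and unable
    // to recognize quotation, parody, or pastiche as exceptions.
    return this.registry.has(fingerprint) ? "block" : "allow";
  }
}
```

Even this toy version shows the asymmetry the article describes: the matching logic is cheap, but building and licensing a comprehensive registry, plus the AI needed for inexact matches, is what only the giants can afford.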

Who is exempt from Article 13?

In an attempt not to completely destroy the start-up ecosystem, the EU has put a couple of "mitigation measures" in place that platforms must adopt in order not to be liable for unauthorised content that users upload.

These “mitigation measures” are as follows:

  1. All platforms must make “best efforts” to license copyrighted works uploaded by their users. Much depends on how “best efforts” will be interpreted. The vague term is troublesome, since paying for so many licenses will be all but impossible for smaller players in the field.

  2. In addition, all platforms will have to make “best efforts to take down works upon notice from rightsholders”. There is nothing new here, since platforms already had this obligation under the E-Commerce Directive.

  3. Additionally, all platforms with 5 million monthly users will have to make sure that removed copyrighted content stays removed. In other words, these platforms will have to bring filters to the game in order to prevent re-uploads of the content.

  4. Lastly, all platforms that are older than 3 years and have more than 10 million in yearly revenue will have to make “best efforts to ensure the unavailability of specific works for which the rightsholders have provided the service providers with the relevant and necessary information.”

 

In light of these exceptions, only a small number of platforms and companies will avoid being held accountable, and only for a short period of time, for unauthorised content uploaded to their websites.
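The four measures above can be encoded as a small eligibility check, using the thresholds as the article states them (the legal meaning of "best efforts" is, of course, exactly what remains undefined; currency and user-counting rules are assumptions):

```typescript
interface Platform {
  monthlyUsers: number;   // monthly users, per measure 3's 5-million threshold
  yearsOld: number;       // platform age, per measure 4
  yearlyRevenue: number;  // "10 million yearly revenue" as stated; currency unspecified
}

function obligations(p: Platform): string[] {
  // Measures 1 and 2 (licensing and notice-and-takedown) apply to everyone.
  const duties = ["license-best-efforts", "takedown-on-notice"];
  // Measure 3: keep removed content removed, i.e. upload filters.
  if (p.monthlyUsers >= 5_000_000) duties.push("prevent-reupload");
  // Measure 4: proactive unavailability for older, higher-revenue platforms.
  if (p.yearsOld > 3 && p.yearlyRevenue > 10_000_000) {
    duties.push("proactive-unavailability");
  }
  return duties;
}
```

Even a brand-new platform with no revenue carries the first two duties, which is why the exemption window the article describes is so narrow.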

Article 13's little brother, Article 11

Besides Article 13, another article was approved: Article 11. This one is a little easier to digest. Article 11 targets news aggregators like Google or Apple, which use AI-driven algorithms to find the most important news of the day. It helps news outlets generate more money for the content they create by imposing a tax on the snippets of information shared on search engines and social media. News outlets will now be able to charge Facebook, for example, a tax for sharing a snippet with its audience. This may lead to a decrease in the amount of news you see shared on social media, since sharing it will be more expensive. It may also make it harder for smaller news publishers to grow an audience, because gaining exposure will be more difficult. However, only time will tell the outcome of this directive.

How will these changes affect you as a site owner?

Website owners will have to think carefully about the content on their sites. When you run a website, you must make sure you already have a license for the content you share there. When embedding a video or quoting a snippet from a blog, you will need the copyrights in place, and you will have to mind the tax on sharing that snippet. If not, you will either have to take the unapproved content down or risk facing legal action. Either way, the exact consequences for breaching copyright law are yet to be defined.

How will you be affected as a regular user?

If you are a designer, musician, photographer, blogger, or anyone else who creates content that can be published online, you will be entitled to copyright your creations. You will then be able to track down whoever is sharing your content without permission and either ask for some kind of compensation or ask for your content to be removed from their website. If the other party does not comply, you will be able to take legal action. This is how it has always been. However, you might have trouble uploading content, since the filters might flag your work as infringing even when it isn't, effectively limiting what people in Europe can post and consume.

There is still time to adapt

Keep in mind that Article 13 and Article 11 are European directives, meaning that each member state will have two years from the decision to interpret and adopt them into national law as it sees fit. Website owners will therefore have plenty of time to adjust their sites to the new directive.

Apr 02 2019
Completed Drupal site or project URL: https://www.mass.gov/

The Commonwealth of Massachusetts embarked on a large-scale web replatforming to modernize the way it engages with and provides services to its constituents online, and to make continuous improvement easier. By improving the user experience for its constituents and providing, as comprehensively as possible, a "single face of government," Mass.gov® has become an essential platform in the Commonwealth's ability to deliver information and critical day-to-day social services to its 6.8 million people. With 15 million page views each month, the Mass.gov platform hosts more than 400 government agencies, along with content to support anyone who wants to move to, visit, or do business in the state. All of this is fueled by the Commonwealth's 600+ content authors, who have used the platform to improve constituent satisfaction, making a noticeable positive difference in how the public sees their organizations and the services they provide.

As early adopters of Drupal 8, Mass.gov and its ecosystem of web properties are the primary platforms for the delivery of government services and information in the Commonwealth of Massachusetts.

Apr 02 2019

DrupalCon Seattle Gold Sponsors

Commerce Guys is joining forces with some of our Technology Partners and a variety of contributors to promote Drupal Commerce at DrupalCon Seattle from April 9-11, 2019.

We're excited to introduce the Drupal community to Centarro Toolbox, a collection of SaaS products and support packages that help Drupal Commerce teams build with confidence. First revealed at MidCamp last month, we can't wait to show it off to the community at large while also connecting about all things Commerce 2.x.

Come demo Centarro Toolbox (and grab some sweet swag)

There's a lot to see inside Centarro Toolbox!

It includes three of our own SaaS tools designed to complement any Drupal Commerce site. They provide update automation, code quality monitoring, and a sales and analytics dashboard that delivers key insights to merchants. We've also bundled in offers from our partners at Avalara, Human Presence, and Lockr, and we'd love to share all about them.

We'll be handing out some exclusive swag this year, including our first pressing of custom coins ... no clue why it took us a decade to think that up! We're also stocking the booth with some rad sweatbands to keep your brows dry at the after-parties, Dave Grohl style. Finally, visitors to the booth can enter to win:

  • 1 of 3 copies of Preston So's book, Decoupled Drupal in Practice (winner chosen at random each day)
  • An Xbox One courtesy of PayPal (winner chosen at random Thursday afternoon)

Drupal Commerce in the spotlight

There's a lot to be said about how Drupal Commerce is making merchant and agency teams more productive, and you don't just have to take our word for it. Add the following sessions to your schedule to learn more:

Last but not least, if you've made it this far, chances are you're really into what we're doing with Drupal Commerce. If that's you, we'd like to invite you to an exclusive reception at Avalara's HQ. We'll enjoy food and beverages from the 18th floor looking out over downtown Seattle. The party will be from 6:30 - 8:30 PM Wednesday, but space is limited! Reach out in advance or find us early at the show to reserve your spot.

Schedule Time to Meet

If you're heading to DrupalCon, we'd love to chat about Drupal Commerce with you. Use our meeting request form to get on our calendar to discuss a particular project or need, or subscribe to our newsletter to be kept in the loop more generally.

Apr 02 2019
Building a Decoupled LMS for Estee Lauder
Completed Drupal site or project URL: https://elx3.myelx.com/

* The given URL is gated, as per the client's request.

Estee Lauder is a global leader in prestige beauty, delighting consumers with transformative products and experiences and inspiring them to express their individual beauty. It takes pride in focusing solely on prestige makeup and beauty care, with a diverse portfolio of 25+ brands distributed globally through eCommerce channels and retail outlets in 150 countries.

They aim to transform the beauty industry landscape through customer service, providing their teams with modern, digital training and a deep understanding of the products.

Srijan worked closely with Estee Lauder to design an open-source, multilingual, decoupled learning platform where beauty advisors can consume a vast set of learning resources.

Here’s how we helped them reduce 30% cost in classroom training while also able to track ROI from the learning and training initiatives.

Apr 01 2019

For those who don’t work in the trenches of digital accessibility, the guidelines can seem confusing or overwhelming. The fact is, it’s not necessary to know the details of all 73 individual Web Content Accessibility Guidelines (WCAG) 2.1 success criteria in order to make design decisions and contribute to a website planning session.

Options for Learning About WCAG 2.1

Before we share the seven perspectives, we want to acknowledge that the World Wide Web Consortium has organized the WCAG criteria into four principles. You can read the principles and their guidelines; however, it might be hard to imagine their true impact without further investigation into the criteria.

The four WCAG principles are:

  • Perceivable - “Information and user interface components must be presentable to users in ways they can perceive.”1
  • Operable - “User interface components and navigation must be operable.”1
  • Understandable - “Information and the operation of user interface must be understandable.”1
  • Robust - “Content must be robust enough that it can be interpreted by a wide variety of user agents, including assistive technologies.”1

Another way to shuffle the criteria deck is with the following proposed perspectives:

  1. Physically Interacting with a Web Page
  2. Understanding the Context of a Web Page
  3. Seeing the Text on a Web Page
  4. Understanding the Text on a Web Page
  5. Interpreting Media via the Web
  6. Controlling Media on a Web Page
  7. Using Forms on Web Pages

1. Physically Interacting with a Web Page

When you visit a web page, you scroll or swipe to move the page through the browser window. When you find a link you want to follow, you click or tap. If you couldn’t do either, what would you do? Perhaps you would:

  • Use keyboard keys,
  • Use mouth sticks, or
  • Use voice recognition.

In order to make these other options possible, you need to give users the ability to “touch” the page and ensure their assistive technology can understand your code. For example, you need to:

  • Enable a visitor to skip to the most important aspects of the page right from the start;
  • Plan the path that the tab key on the keyboard will follow;
  • Ensure a visitor knows where their tab focus has landed;
  • Provide alternatives to common mobile finger gestures such as zoom; and
  • Allow site visitors to choose their own input mechanisms.

One way to imagine these concepts is to picture yourself in a maze. You can only “see” what is in front of you. Where do you go? Once you learn the path to the center of the maze, you can visit it anytime. However, what would happen if someone changed the layout of the maze from day to day? You would have to start over and learn the path to the center again and again.

Consistent page layouts provide consistent in-page navigation. Layout creativity from page to page can cause issues rather than offering visitors a fresh experience.
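Planning the tab path can be sketched as the focus-order rule browsers follow, simplified here with hypothetical element names: positive tabindex values come first in ascending order, then naturally focusable elements (tabindex 0) in document order, while negative values are skipped entirely.

```typescript
interface Focusable {
  id: string;
  tabindex: number; // as authored in the markup
}

// Compute the order the tab key will visit elements, given their
// document order and tabindex values (a simplification of browser rules).
function tabOrder(elements: Focusable[]): string[] {
  const positive = elements
    .filter((e) => e.tabindex > 0)
    .sort((a, b) => a.tabindex - b.tabindex); // stable sort keeps doc order on ties
  const natural = elements.filter((e) => e.tabindex === 0);
  // tabindex < 0 is focusable only programmatically, never via the tab key.
  return [...positive, ...natural].map((e) => e.id);
}
```

Running a function like this against a planned layout is one way to verify the keyboard "maze" stays the same from page to page.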

2. Understanding the Context of a Web Page

With eyes that see, you can discern information from a web page without consciously thinking about it. You take in the layout (e.g., portrait or landscape) of the page and quickly identify the:

  • Header and information about the site owner;
  • Title of the page and if it is a topic landing page or one of your many articles;
  • Menus that provide insight into the information architecture of the site; 
  • Search functionality as well as other ways to find the content you need;
  • Sidebars that contain various items such as lists with purposeful links; and 
  • The main content and its article, event, etc.

When you scroll down the page, you might find that the content is organized into subtopics, allowing you to quickly discern whether the blog post will be of interest.

Then, as you dive deeper into the site, past the homepage, to a landing page, to a bit of content, there might be hints to remind you where you are. With a well-planned site, you will see consistent layouts, allowing you to anticipate which section of the page contains objects with which you want to engage.

WCAG 2.1 provides guidelines and criteria designed to deliver a similar experience for those who can’t rely on eyesight to navigate a page.

3. Seeing the Text on a Web Page

There are two common visual challenges to which many of us can relate: the need for glasses and color blindness. Even with corrective lenses, there might be times when they aren’t enough. And even if you aren’t color blind, there might be times when you cannot read text sitting on top of an image or colored background.

There are WCAG 2.1 criteria focused on helping you enable visitors to adjust your text so that they can see it. As for color (in images or styling), the objective is to ensure enough contrast.

A contrast example that is easily overlooked is that of links. A blue link works well on a white background; however, that color might not work when it appears on top of a block of color. So, some planning must go into the styling of links across the multiple sections of your page, or some of your links might visually disappear.
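Contrast is one of the few accessibility checks you can compute exactly. WCAG defines a contrast ratio from the relative luminance of the two colors, and normal-size body text needs at least 4.5:1 at level AA. A minimal implementation of that formula:

```typescript
// Relative luminance per the WCAG definition: linearize each sRGB
// channel, then weight by the eye's sensitivity to red, green, and blue.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05); ranges 1 to 21.
function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black on white scores the maximum 21:1; checking each link color against every background block it can land on, with a threshold of 4.5, is exactly the planning step described above.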

4. Understanding the Text on a Web Page

There are two perspectives to the idea of understandable text: the meaning of words and interpreting meaning from the way it is displayed. 

Interpreting the Meaning of Words

We don’t need WCAG 2.1 to motivate us to write well. However, on occasion, even the best writers slip up on the basics, such as:

  • Using unusual words (e.g., regional terms better understood by the locals);
  • Abbreviating without providing insight as to the meaning of the abbreviation;
  • Forgetting the reading level of your audience (e.g., writing to subject matter expert versus someone less schooled);

Then, there are writing aspects that are easy to overlook such as the use of heteronyms. The one that often gives me pause is lead. 

  • Lead, pronounced LEED, means to guide. 
  • Lead, pronounced LED, means a metallic element.

It’s also important to declare the language of your content. Even if you drop a foreign language into the middle of your content, you need to let the reader know.

Interpreting Meaning from the Display 

What if you couldn’t see visual cues concerning relationships among headers and sub-sections? How would you know there were parent/child relationships?

That’s where heading elements, rather than <strong> tags, come into play.

Visual and/or auditory cues are just one way to convey meaning. Oftentimes, the order in which content is presented carries meaning. Imagine you have two blocks of content: step one and step two. They appear next to each other, in order.

Here’s the test. Strip away the styling so that they do not appear next to each other. Are they still in the right order? Make sure they are.

5. Interpreting Media via the Web

Images and video are probably the first types of media to come to mind when media is mentioned. There are also shapes (e.g., a CSS-based checkmark), animations, and audio files.

Bottom line when it comes to non-text items: anything that is not text needs to be explained or declared as decorative.

Images

An image might be worth a thousand words, but you can no longer rely on images to convey meaning. If you are using the image to convey 500 out of the 1,000 words, include the 500 words on the page. You have two strategies: a separate “long description” block of text or simply include the information in your narrative.

Video

As for video, you need closed captions at a minimum, whether it’s pre-recorded or a live broadcast. Since you have the text, include it in a transcript as well. Here is an example of an accessible video experience from the World Wide Web Consortium.

Audio

If you upload a podcast to your site, you need to include a transcript. In both video and audio transcripts, you need to include speaker identification if there is more than one, as well as any sounds that convey meaning.

6. Controlling Media on a Web Page

In addition to video and audio player controls, you need accessible controls for items that blink, scroll, or move. There are several “if this, then that” conditions governing where, when, and how controls need to be implemented. 

If you have plans for anything with sound or movement on your page, you need to spend some time to ensure your page is accessible.

7. Using Forms on Web Pages

When you visit a page, it’s likely that the first form you see is a search field. Add a contact form and/or a newsletter signup form, and there is a chance that multiple accessibility issues are lurking.

There are three considerations when adding forms to your site:

  • Labels
  • Instructions
  • Error management

When a label could be ambiguous, instructions can clarify. However, mistakes happen, so your forms need to help users correct their data entry. There are some behind-the-scenes coding requirements that make these pieces accessible, but these considerations will get you started.
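The error-management piece can be sketched as a validator that ties every message to its field, so assistive technology can announce it; in markup this becomes the input's aria-describedby pointing at the message element's id. Field names and the validation rule here are illustrative assumptions.

```typescript
interface FieldError {
  fieldId: string;       // id of the input that failed
  describedById: string; // id of the message element, for aria-describedby
  message: string;       // instruction phrased as a correction, not a scold
}

// Validate a hypothetical newsletter signup form and return errors that
// are already wired for accessible announcement.
function validateSignup(form: { email: string }): FieldError[] {
  const errors: FieldError[] = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email)) {
    errors.push({
      fieldId: "email",
      describedById: "email-error",
      message: "Enter an email address in the form name@example.com.",
    });
  }
  return errors;
}
```

The message doubles as the instruction: it tells the user what a valid entry looks like instead of only saying something went wrong.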

Conclusion

Making a website accessible isn’t just about code. It calls for a high degree of awareness each time content is added. Maintenance and auditing processes are essential to ensuring accessibility. Need further explanations or insights into accessibility? Contact us today.

Apr 01 2019

Helping content creators make data-driven decisions with custom data dashboards

by Greg Desrosiers

Aug 20, 2018

Our analytics dashboards help Mass.gov content authors make data-driven decisions to improve their content. All content has a purpose, and these tools help make sure each page on Mass.gov fulfills its purpose.

Before the dashboards were developed, performance data was scattered among multiple tools and databases, including Google Analytics, Siteimprove, and Superset. These required additional logins, permissions, and advanced understanding of how to interpret what you were seeing. Our dashboards take all of this data and compile it into something that’s focused and easy to understand.

We made the decision to embed dashboards directly into our content management system (CMS), so authors can simply click a tab when they’re editing content.

GIF showing how a content author navigates to the analytics dashboard in the Mass.gov CMS.

How we got here

The content performance team spent more than 8 months diving into web data and analytics to develop and test data-driven indicators. Over the testing period, we looked at a dozen different indicators, from pageviews and exit rates to scroll-depth and reading grade levels. We tested as many potential indicators as we could to see what was most useful. Fortunately, our data team helped us content folks through the process and provided valuable insight.

Love data? Check out our 2017 data and machine learning recap.

We chose a sample set of more than 100 of the most visited pages on Mass.gov. We made predictions about what certain indicators said about performance, and then made content changes to see how they impacted the data related to each indicator.

We reached out to 5 partner agencies to help us validate the indicators we thought would be effective. These partners worked to implement our suggestions and we monitored how these changes affected the indicators. This led us to discover the nuances of creating a custom, yet scalable, scoring system.

Line chart showing test results validating user feedback data as a performance indicator.

For example, we learned that a number of indicators we were testing behaved differently depending on the type of page we were analyzing. It’s easy to tell if somebody completed the desired action on a transactional page by tracking their click to an off-site application. It’s much more difficult to know if a user got the information they were looking for when there’s no action to take. This is why we’re planning to continually explore, iterate on, and test indicators until we find the right recipe.

How the dashboards work

Using the strategies developed with our partners, we watched, and over time, saw the metrics move. At that point, we knew we had a formula that would work.

We rolled indicators up into 4 simple categories:

  • Findability — Is it easy for users to find a page?
  • Outcomes — If the page is transactional, are users taking the intended action? If the page is focused on directing users to other pages, are they following the right links?
  • Content quality — Does the page have any broken links? Is the content written at an appropriate reading level?
  • User satisfaction — How many people didn’t find what they were looking for?

Screenshot of dashboard results as they appear in the Mass.gov CMS.

Each category receives a score on a scale of 0–4. These scores are then averaged to produce an overall score. Scoring a 4 means a page is checking all the boxes and performing as expected, while a 0 means there are some improvements to be made to increase the page’s overall performance.
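The roll-up described above is simple enough to state in code: each category gets a 0-4 score and the overall score is their average. The category names match the post; the one-decimal rounding is an assumption.

```typescript
type Scores = {
  findability: number;      // 0-4
  outcomes: number;         // 0-4
  contentQuality: number;   // 0-4
  userSatisfaction: number; // 0-4
};

// Average the four category scores into the page's overall score.
function overallScore(s: Scores): number {
  const values = [s.findability, s.outcomes, s.contentQuality, s.userSatisfaction];
  const avg = values.reduce((sum, v) => sum + v, 0) / values.length;
  return Math.round(avg * 10) / 10; // one decimal place (assumed presentation)
}
```

A page scoring 4 on findability but 2 on content quality still lands mid-range overall, which is what points an author at the weak category rather than a single opaque number.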

All dashboards include general recommendations on how authors can improve pages by category. If these suggestions aren’t enough to produce the boost they were looking for, authors can meet with a content strategist from Digital Services to dive deeper into their content and create a more nuanced strategy.

GIF showing how a user navigates to the “Improve Your Content” tab in a Mass.gov analytics dashboard.

Looking ahead

We realize we can’t totally measure everything through quantitative data, so these scores aren’t the be-all, end-all when it comes to measuring content performance. We’re a long way off from automating the work a good editor or content strategist can do.

Also, it’s important to note these dashboards are still in the beta phase. We’re fortunate to work with partner organizations who understand the bumps in the proverbial development road. There are bugs to work out and usability enhancements to make. As we learn more, we’ll continue to refine them. We plan to add dashboards to more content types each quarter, eventually offering a dashboard and specific recommendations for the 20+ content types in our CMS.

Apr 01 2019

Vienna, VA, March 19, 2019

Mobomo, LLC is pleased to announce our award as a prime contractor on the $25M Department of the Interior (DOI) Drupal Developer Support Services BPA. Mobomo brings an experienced and extensive federal Drupal practice team to DOI. Our team has launched a large number of award-winning federal websites in both Drupal 7 and Drupal 8, including www.nasa.gov, www.usgs.gov, and www.fisheries.noaa.gov. These sites have won industry recognition and awards, including the 2014, 2016, 2017, and 2018 Webby Awards; two 2017 Innovate IT awards; the 2018 MUSE Creative Award; and the Acquia 2018 Public Sector Engage award.

DOI has been shifting its websites from an array of Content Management System (CMS) and non-CMS-based solutions to a set of single-architecture, cloud-hosted Drupal solutions. In doing so, DOI requires Drupal support for hundreds of websites that are viewed by hundreds of thousands of visitors each year, including its parent website, www.doi.gov, managed by the Office of the Secretary. Other properties include websites and resources provided by its bureaus (Bureau of Indian Affairs, Bureau of Land Management, Bureau of Ocean Energy Management, Bureau of Reclamation, Bureau of Safety and Environmental Enforcement, National Park Service, Office of Surface Mining Reclamation and Enforcement, U.S. Fish and Wildlife Service, U.S. Geological Survey) and many field offices.

This BPA provides that support. The period of performance for this BPA is five years and it’s available agency-wide and to all bureaus as a vehicle for obtaining Drupal development, migration, information architecture, digital strategy, and support services. Work under this BPA will be hosted in DOI’s OpenCloud infrastructure, which was designed for supporting the Drupal platform.

Apr 01 2019
Apr 01

Don’t miss the DrupalCon Seattle session, “Hot Dog/Not Hot Dog: Artificial Intelligence with Drupal” by Rob Loach, Director of Technology at Kalamuna.

When Success Becomes a Problem

As user-generated content platforms such as forums, classified listings, and recipe sites become increasingly popular, site administrators can quickly become overwhelmed by the volume of content to be moderated. While a lot of automated spam and other inappropriate content can be screened out by CAPTCHA tests, content submitted in violation of site policy must typically be moderated by humans. 

Whether the user-contributed content is reviewed by a person prior to publishing or reviewed after the fact matters significantly. Allowing users to post directly to a site risks damaging your brand’s reputation if users upload inappropriate content. Without moderation prior to publishing, there is no guarantee malicious content will be identified before offending other users.

For the most part, this sort of human moderation of content is feasible when a site is relatively small. Once an online platform finds its audience and grows larger, however, it becomes increasingly unsustainable to scale human moderation of the user-contributed content.

Fortunately, where these moderation tasks become repetitive and systematic, computers and artificial intelligence may begin to provide solutions to the limits of scaling human beings.

An Intriguing Proposal

When Josh Koenig, Co-Founder & Head of Product at Pantheon, approached Kalamuna CEO Andrew Mallis about integrating Google Cloud Vision with Drupal, we were intrigued. As a team of strategists, designers, and developers working on the web, we are acutely aware of the burdens that come with website management and administration. We constantly seek new ways to simplify and streamline the systems we build for our clients’ editorial staffs, especially when it comes to problems of scale. Artificial intelligence and machine learning offer so much potential in this regard that we were eager to find a way to harness this technology within Drupal.

Initially, we were interested in exploring machine learning to automatically update content archives on large websites where ALT text was missing. This seemed like an opportunity where AI could perform large-scale, repetitive tasks that may be implausible for a person to perform. Unfortunately, we found the technology was not sufficiently reliable for this application yet. 

Enter the hyperlocal news portal Patch.com and its CTO Abe Brewster. Patch.com is a longtime Pantheon partner with a clear use case for image analysis and content moderation. It allows users to author and post local news stories that may be chosen for syndication and distribution across its network of “patches” (i.e., local sites) across the United States. No stranger to the risks of hosting and publishing user-generated content, Patch.com already had a robust content moderation process in place, but it was mostly manual and relied on site members, the public, and sometimes lawyers to flag inappropriate or copyright-protected images.

Once we understood the use case, Kalamuna and Pantheon saw the opportunity to create an AI-assisted content moderation tool for the Patch.com editorial team. By applying Kalamuna’s user-centered design practices, we felt confident we could make the editorial experience a seamless combination of Google Cloud’s AI technology and Drupal’s workflow capabilities.

Integrating Drupal and Google Cloud Vision

We wanted to explore where some of the common integration points could be made between Drupal and Google Cloud Vision, so we started building the Google Cloud Vision Drupal module. By leveraging Drupal 8’s hook system, we were able to put together a proof of concept demonstrating some of the API’s capabilities.

To do that we built a demonstration site using the Umami demo profile. This install profile provides a demo version of a foodie website to demonstrate some of the content workflows available in Drupal. 

Screenshot of the Umami homepage which displays recipes and photos of food

Umami provides users the ability to upload recipes, which seemed like a great place to ensure images that users upload don’t contain adult content. Since the images are uploaded through Drupal’s image field, we were able to create a hook to pass them through Google Cloud Vision’s SafeSearch algorithm and get a set of results we could use.

When a user uploads an image that is deemed “Racy” or “Adult” by Google Cloud Vision, it is flagged accordingly.

Animated walkthrough of an image of Michaelangelo's statue of David being uploaded and being denied by the CMS.

We accomplished the image field verification through hook_field_widget_form_alter()...

use Drupal\Core\Form\FormStateInterface;
use Drupal\file\FileInterface;

/**
 * Implements hook_field_widget_form_alter().
 *
 * Adds Google Cloud Vision checks on the file uploads.
 *
 * @see https://api.drupal.org/api/drupal/core%21modules%21field%21field.api.php/function/hook_field_widget_form_alter/8.6.x
 */
function google_cloud_vision_field_widget_form_alter(&$element, FormStateInterface $form_state, $context) {
  $field_definition = $context['items']->getFieldDefinition();
  if (!in_array($field_definition->getType(), ['image'])) {
    return;
  }
  $element['#upload_validators']['google_cloud_vision_validate_file'] = [];
}

/**
 * File Upload Callback; Validates the given File against Google Cloud Vision.
 *
 * @see google_cloud_vision_field_widget_form_alter()
 */
function google_cloud_vision_validate_file(FileInterface $file) {
  $errors = [];

  // Retrieve the file path.
  $filepath = $file->getFileUri();
  $result = google_cloud_vision_image_safesearch($filepath);

  // Check the results.
  if ($result !== TRUE) {
    $errors = $result;
  }

  return $errors;
}

By implementing hook_field_widget_form_alter(), we were able to check whether images uploaded to the site had adult content and deny their use. But what if content editors want moderation control over the images? There could be cases where an image is deemed “Racy” but is still acceptable for use on the site. For this, we integrated with Drupal 8’s Workflow and Content Moderation modules to give content editors a dashboard from which they could approve or deny uploaded images.
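One way that hand-off could look (a sketch only, not the module's published code; the "needs_review" state assumes a content_moderation workflow configured to define it):

```php
<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_ENTITY_TYPE_presave() for node entities.
 *
 * Sketch only: instead of rejecting a flagged image outright, send the
 * host node into a "needs_review" moderation state so an editor can
 * approve or deny it from the dashboard.
 */
function google_cloud_vision_node_presave(EntityInterface $node) {
  if (!$node->hasField('field_image') || $node->get('field_image')->isEmpty()) {
    return;
  }
  $file = $node->get('field_image')->entity;
  // google_cloud_vision_image_safesearch() is the helper used above.
  if (google_cloud_vision_image_safesearch($file->getFileUri()) !== TRUE) {
    // Assumes a workflow that defines this state.
    $node->set('moderation_state', 'needs_review');
  }
}
```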

Animated GIF showing a photo of Michaelangelo's statue of David with a green fig leaf over his genitals being uploaded and flagged for moderation.

Making a Secure Open Source Module

One critical part of setting up the module was getting Drupal to communicate with Google Cloud API. We wanted the service account key to be secure and encrypted, so we used Lockr. Lockr integrates with the Drupal Key module to provide a slick API to obtain encrypted off-site keys. This allowed us to host the site on Pantheon and open source the module, with the assurance that the service account key we used would not become public.

Screenshot of the Lockr module configuration panel

To see more about how Lockr works, read Chris Teitzel’s Using Lockr to Secure and Manage API and Encryption Keys.

Overall, Drupal provides content editors great control over how and what content gets published on their site. Pairing it with Google Cloud Vision brings an added dimension, and offloads a lot of the work content editors would have to do to robots. For more information on how to set up the Drupal module, generate a Google Cloud API service account key, and use Google Cloud Vision on your site, see Kalamuna's Google Cloud Vision Drupal module on GitHub.

Lessons from Working with Google Cloud Vision

It wasn’t too long ago that technology like Google Cloud Vision was the stuff of science fiction. As with any cutting-edge technology, we did at times hit the bleeding edge of its reliability. Like many intelligent machines, Google Cloud Vision has some unexpected quirks where some good ol’ fashioned human intervention is required.

Google Cloud Vision’s API can return a wealth of information about an image, identifying logos, faces, landmarks, handwriting, and more. We were specifically interested in tapping into Google’s Safe Search Detection API to flag explicit content in uploaded images. This is the same technology Google uses in its own Image Search tool, which allows users to toggle “safe search” on or off when searching for images on the web.

The Safe Search Detection API is able to flag an image as containing “adult”, “spoof”, “medical”, “violence”, or “racy” content. We decided that we wanted to target “adult” and “racy” content for our MVP, so that any photos containing nudity would be either rejected outright, or flagged for moderation.

So how would we know if an image should be completely blocked from being uploaded or merely flagged for review by an editor? In addition to guessing whether one of the above categories applies to a scanned image, the API also returns a likelihood value for each category.

Screenshot of the SafeSearch parameters, including adult, spoof, medical, violence, and racy.
JSON representation of an image’s safe search parameters

While working on this project, we learned that tan leather car seats are notoriously hard for computers to distinguish from human skin. Naturally, we had to test this for ourselves, and indeed, seats ranging in hue from beige to light brown would all come back as “racy” with a likelihood ranging from “very unlikely” all the way up to “very likely”, and some would even get flagged as “adult” (albeit qualified as “unlikely”).

Aware of this flaw, we realized that we would have to incorporate a degree of human moderation for any images flagged for “racy” content, and for “adult” content with an “unlikely” probability. 

Screenshot of the Safe Search response to a photo of tan leather car seats, which was very likely racy.

You can try for yourself on the Google Vision product page.
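That policy can be summed up in a small decision function. The likelihood strings below are the values the SafeSearch API returns; the thresholds themselves are our editorial policy, not Google's, and this sketch is not part of the module as published:

```php
<?php

// Sketch of the moderation policy described above. The likelihood
// strings are the API's values; the thresholds are our own policy.
function moderation_decision(string $adult, string $racy): string {
  $confident = ['LIKELY', 'VERY_LIKELY'];
  // Confidently adult content is rejected outright.
  if (in_array($adult, $confident, TRUE)) {
    return 'reject';
  }
  // Possibly adult content, or confidently racy content, goes to a human.
  if ($adult !== 'VERY_UNLIKELY' || in_array($racy, $confident, TRUE)) {
    return 'flag';
  }
  return 'approve';
}

echo moderation_decision('UNLIKELY', 'VERY_LIKELY'); // flag
```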

Planning for a Real-World Publishing Workflow

Throughout the proof-of-concept project we had Patch.com in mind as the end user of this toolset. After all, if we were going to develop something useful, it made sense to model our solution on a real-world use case. So after figuring out how Google Cloud Vision worked, one of the next things we needed to do was figure out how to incorporate it into a real-world publishing workflow.

Below you will find the workflow diagram we designed for the editorial process. When an image is uploaded by a user, it gets processed by Google Cloud Vision (in orange). From there, the image is either approved, rejected, or flagged for manual moderation. Because Google Cloud Vision can produce false positives, we had to introduce a Content Editorial Moderation Dashboard to give editors control over which content is approved or rejected.

Logic flow diagram of a content moderation workflow incorporating Google's VisionAPI

Lessons Learned: Limitations of PHP Uploads and Legal Liability Concerns

An uploaded file saved anywhere on your server – even temporarily in a file buffer – can potentially classify you as a distributor of that content. An image could contain subject matter or content that could make you liable for copyright violation or place you in serious legal jeopardy. 

Drupal saves images uploaded through image fields in the “tmp” temporary directory. Theoretically, JavaScript could encrypt a base64 string of the selected file and upload it through a POST request. Drupal would then catch the string, decrypt it, and pass the result off to Google Cloud Vision. Processing images before they’re saved locally was outside the scope of this module, but we learned new lessons about how Drupal and PHP handle file uploads along the way.
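For illustration only (nothing below is in the module), the in-memory approach could start by decoding a POSTed data URI so the image bytes never touch the filesystem before being handed to the Vision API:

```php
<?php

// Illustrative sketch: decode a base64 data URI POSTed by the client so
// the image bytes can be analyzed without ever being written to disk.
function decode_image_data_uri(string $data_uri): ?string {
  // Expect something like "data:image/png;base64,iVBORw0...".
  if (!preg_match('#^data:image/[\w.+-]+;base64,(.+)$#', $data_uri, $matches)) {
    return NULL;
  }
  $binary = base64_decode($matches[1], TRUE);
  return $binary === FALSE ? NULL : $binary;
}
```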

Drupal 7 vs Drupal 8

As part of our proof of concept, we put together both a Drupal 7 and a Drupal 8 version of the module. Since Patch.com was running on Drupal 7, we had a good use case to test. There were a couple of benefits that came with the Drupal 8 version, however.

  • Package Management
    • Drupal 8 integrates very well with the Composer package manager. This meant we could bring down the Google Cloud Vision PHP package seamlessly through a call to composer install. We managed to do something similar in Drupal 7, but had to jump through some hoops to make sure it functioned correctly.
  • Content Moderation
    • The new Workflow and Content Moderation modules in Drupal 8 gave content editors a great interface to approve or deny content publication flagged by SafeSearch.
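For reference, pulling the client library into a Composer-managed Drupal 8 project is a single command (google/cloud-vision is Google's official Vision component on Packagist):

```shell
# Fetch the Google Cloud Vision PHP client and its dependencies.
composer require google/cloud-vision
```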

To Be Continued...

Working on this project was both challenging and a lot of fun, and it definitely got us thinking about all the ways we could incorporate AI and Machine Learning into large-scale Drupal sites, and how we could improve the content moderation experience for sites like Patch.com. Helping people save time so they can focus on the things machines can’t do is fundamental to the work we do every day. 

Want to learn more? 

Rob Loach, Kalamuna’s Director of Technology, will be presenting a session at DrupalCon Seattle about this project: “Hot Dog/Not Hot Dog: Artificial Intelligence with Drupal.”

Date: 04/11/2019
Time: 9:45 am – 10:15 am
Room: 607 | Level 6

Apr 01 2019
Apr 01

Following every article or blog post about the future of Drupal, there are always a few comments expressing a high level of frustration with Drupal 8. From missing modules, to completely rewritten APIs, to new design patterns, and even complaints about the restructuring of sessions and tracks at DrupalCon, people are frustrated with change. The Drupal community understands this frustration, which is why we are continually trying to improve the software and community. Our most significant strides happen at DrupalCons, where everyone comes together to share code, ideas, and passion.

Introspection before DrupalCon

Before last year’s DrupalCon I wrote a blog post titled Drupal is the worst Content Management System except for all those other solutions, in which I talked about Drupal adoption, sustainability, and mentorship. DrupalCon provides the Drupal community with an opportunity for everyone to take stock of where we are and where we are going. Every year we have discussions with actionable items to improve our community, which is continually changing.

Each year, I get something different out of DrupalCon. Last year I became more aware of diversity and inclusion issues within the Drupal and software community. I was also inspired to address webform related accessibility issues. The overarching thing I have learned at DrupalCons is contributing to Open Source is more than writing code, it’s about collaboration and community.

Come for the code; stay for the community

What I like about this quote is that it acknowledges two key aspects of Drupal - the code and the community - with the code being the “what” that brings us together. The code might also be what is most frustrating to people coming to and staying with Drupal. But even getting frustrated as a community is a form of connectedness - we’re all invested in the code, together.

Our Lego analogy is broken

There are many articles expounding that Drupal's modularity is like Lego building bricks.

For example, Rob McBryde's post, Drupal vs. WordPress - A Lego Analogy states…

"Drupal is like Lego Technic where you can build whatever you can imagine with little to nothing ​preformed."

We promote Drupal as being as flexible and easy as Lego building bricks, endlessly combinable to build anything. Omitted from this analogy is that some of Drupal's bricks don't fit together, some don't work as expected, some are missing, and some stop working; this is incredibly frustrating.

Imagine getting a box filled with brightly colored plastic bricks, no directions, and occasionally some broken or missing blocks. Once you determine something is missing, you’re supposed to go out and find that missing piece or build it yourself.

Drupal is no longer as simple as a pile of Lego bricks, and our projects now require master builders.

Ambitious digital experience with enterprise support

It took me a few years to fully accept that Drupal is for "ambitious digital experiences" when we are competing directly with enterprise content management solutions. Shouldn't we be honest and include the word "enterprise" in our software description? Enterprise companies rely on Drupal as a key part of their digital strategy and infrastructure. When we look at the gestalt of everything organizations are doing with Drupal, and what we as a community are trying to build, it can’t be denied: it is ambitious.

We need to recognize that for organizations to build their ambitious digital experiences, they need enterprise-level support.

Drupal has succeeded in inspiring Fortune 500 companies to embrace Open Source without a clear roadmap on how to do it properly. These are huge companies with clear missions, and Drupal is a flexible and powerful tool used to help these companies succeed. Companies have no issue paying for support, yet it is unclear in the Drupal community how to get support.

Do you contract an agency? Hire your own team? Contribute code directly back to Drupal? Sponsor a camp? Join the Drupal Association? Help fund the Drupal project?

The answer is all of the above. Providing and receiving enterprise support for Drupal is a multifaceted endeavor, which is not clearly and universally communicated to organizations. The most immediate example of this problem is that Drupal.org's support page is developer-centric. We need to promote better support within the Drupal community. Support needs to be perceived as a priority by those using Drupal; this perception will, in turn, create a greater sense of responsibility and investment among those of us writing code, fixing bugs, and sharing ideas. Our drive to collaborate and make something great even better is already there - consistent recognition and support will motivate, encourage, and ultimately yield even better results.

Promoting and supporting Drupal

Last year, the Promote Drupal initiative was introduced at DrupalCon Nashville. At DrupalCon Seattle this initiative is going to start proposing, planning, and promoting Drupal. Having sales and marketing materials that everyone can use will help all our pitches and also establish a unified message. I am optimistic that we will be able to clearly communicate the benefits of Drupal and Open Source and persuade organizations to explore our software and community.

Our first step is convincing organizations to enter the open source Drupal waters; then we need to teach them how to swim.

On my blog, I am continually drawn back to exploring the importance of teaching and mentoring in Open Source software development.

The best teachers are not rockstars; they are mentors.

The Drupal rockstar is imaginary

I am hoping that Matthew Tift (mtift) is going to end the myth of the Drupal rockstar with his DrupalCon presentation, The Imaginary Band of Drupal Rock Stars. I was always intimidated by Certified to Rock, which ranked a Drupal.org user's contribution on a scale of 1 to 10; I continually got a 3, which was frustrating and discouraging.

If the notion of the Drupal rockstar is over, what should we call ourselves?

The term 'Drupal expert' may also be a fallacy because the most valuable people in the Drupal community are the contributors, mentors, maintainers, and leaders.

Contribution and mentoring is the key to success

There are two key steps for a person or organization to fully value and understand how to engage in Open Source. First, they need to recognize that the software is built by people freely collaborating, contributing ideas and code that benefit the overall project. Second, they need to acknowledge that the Open Source workflow involves maintainers mentoring contributors.

Mentoring the next generation of Drupal contributors

The maintainer to contributor mentoring relationship is what builds and maintains our software. We are continually mentoring new contributors who become maintainers. Mentorship is what grows our community.

Drupal is about to hit its twentieth anniversary. Our community is getting older, while the software industry grows with younger people filling the ranks. Drupal is not as appealing as the new and shiny rising technologies. Still, the concept that Drupal could be the backbone content repository for an organization's changing digital strategy is a valuable, long-term vision.

Besides our software, we have a vibrant community, and our product powers amazing digital experiences. The fact that our software powers large enterprise companies is a double-edged sword. It is appealing to see one's contribution being used by and helping recognizable organizations, yet it’s also intimidating to meet those organizations' expectations. And just as the thrill of meeting those expectations is being realized, the notion of contributing code for free - effort that benefits enterprise organizations, which have money - collides with it, and it dawns on you: something is off here.

Open source contributions come in all shapes and sizes

Open source contributions come in all shapes and sizes, and we do a great job of documenting ways for people to get involved, but we avoid directly asking large organizations to get involved by financially contributing to Drupal. Large organizations are willing to pay for enterprise-level support; however, their support options are limited. Generally, enterprise-level Drupal support involves a one-on-one relationship with a Drupal agency (a.k.a. Drupal experts). Drupal is too big for any single agency to solve all of its problems and challenges.

When we start collaborating to promote Drupal, are we also going to collaborate to improve the support for Drupal?

Every single pitch which includes Drupal as part of the solution needs to also introduce organizations to how they can get involved and support Drupal.

Compensating contributors

Recently, I worked through a few security issues related to the Webform module. I saw how valuable and important it is to have as many eyeballs as possible helping to review and secure our shared code. It is great that we are now compensating people for finding security issues.

What about paying people to fix security issues, critical bugs, or even feature requests?

Removing the frustration

Let's ask the direct question…

What is frustrating people about Drupal?

People become frustrated when something does not meet their expectations and there is no immediate, easy way to fix the problem or improve the situation.

Jeff Robbins' (jjeff) blog post How Drupal Will Save The World, written more than a decade ago, documents the same frustrations and problems we are still facing today.

"Drupal is what you want. It will solve all of the problems that you're having. But it's really hard to set up. And don't do it wrong or your site won't work at all!"

Jeff also offered some answers to…

What if we had unlimited funds to grow Drupal into its full potential as an indispensable web tool that is simple to use and lets people get control of maniacally complex data?

All of Jeff's answers boil down to collecting and distributing funds to address and solve the challenges and frustrations around Drupal and Open Source.

Funding Open Source using Open Collective

Open Collective is a service that allows Open Source projects to transparently collect and distribute funds. Organizations that back an Open Collective get a receipt for their financial contributions and can see exactly how the collected money is being spent.

Having unlimited funds does not solve any problem unless the money can be properly distributed to the people who are needed to solve the problem. Since there is no such thing as unlimited funds, how funds are spent also needs to be tracked.

Open Collective has the potential to solve the funding problem hindering Drupal and Open Source by making it possible to distribute and track funds. We need to raise awareness about what Open Collective is, publicize individual Open Collectives, contribute funds to them, and finally experiment with spending the collected funds to improve our software and community.

Most of my blog posts end with a call to action to back the Webform module's Open Collective and any Drupal-related Open Collective. Yes, we need your funds, and right now we also need your ideas. If you want to share your ideas or start your own Open Collective, please consider attending the Open Collective BOF at DrupalCon Seattle on April 10th, 2019 at 17:30.

Final thoughts

Drupal is no longer a box of colorful Lego bricks being used by rockstars to build websites.

Drupal is used to build ambitious digital experiences by a community made up of maintainers, mentors, contributors, and leaders who collaborate to build a powerful and flexible ecosystem of ideas that evolve with the continuously changing digital landscape of the world.

The challenge is supporting and sustaining our collaboration.


Apr 01 2019
Apr 01
Rapid rollout of brand websites for a leading publishing conglomerate.
Completed Drupal site or project URL: https://www.autonews.com/

The Client is one of the largest privately owned business media companies with 55 leading business, trade and consumer brands in North America, Europe and Asia.

Key Highlights:

  • Rapid brand rollouts
  • Engineering team’s time is freed up - the first brand was able to set up around 50 landing pages without needing any developer intervention
  • All brands share best practices across analytics, SEO (meta tags, etc.), performance (Drupal Cache, Varnish, Cloudflare), media management, etc., as these are part of the Core platform
Apr 01 2019
Apr 01
Rapid rollout of brand websites for a leading publishing conglomerate.
Completed Drupal site or project URL: https://www.autonews.com/

Crain Communications is one of the largest privately owned media companies, with more than 50 leading business, trade and consumer brands in North America, Europe and Asia. It delivers news to more than 6 million global business leaders.

Managing a number of websites proved to be a challenge, since each brand was operating on its own web platform. The new digital initiative aimed to ensure brand consistency and operational efficiency.

A central governance model would help efficiently manage the different web properties while also lowering costs.

Srijan rolled out 15+ websites onto the new platform from various frameworks without compromising on best practices across SEO, performance, and media management.

The central digital publishing platform was built on Drupal 8, reducing development time by half compared to an earlier estimate (if the websites had been built independently).

Here’s how we did it:

Apr 01 2019
Apr 01

Get ready for your trip to Seattle with this April Fool's Day special episode of the DrupalEasy Podcast.

Drupal Fools

a Dr. Seuss joint (sort of)

From the top of your head to the tips of your toes,
You are a Drupaller, everyone knows.

You think about Drupal all day and all night.
You’re more than obsessed with getting things right.

Your Views and your Fields and your Content Types,
Simply scream “Drupal!” here are some reasons why:

Your User Experience is so Steve Jobs-esque
Your Splash Awards medals are spilling off your desk!

Your API documentation site,
Lets your developers get sleep at night.
You’re well known to eat your own dog food.
When you’re done with work you’re still in a good mood.
And why not? Your patches have all been applied,
The hackers can’t get in, and they even tried!

I heard a rumor at DrupalCon Seattle,
That your coding standard leaves code reviewers rattled.
Before you go saying that I must be kidding,
Know that I snuck a peek, I did some digging…
I couldn’t find one curly brace out of place,
One line break too many or one poor stack trace.

Have you ever noticed just how our community
Is one of the best at promoting unity?
We’re very good at giving hugs,
Especially to those who have fixed our bugs.
But if you’re not a fan of physical affection,
We don’t make a big deal of your objection.
(We’re still glad you chose to sit in our section)

Like Birds of a Feather or a Business of Ferrets,
The Drupal community celebrates merits.
Even if you don’t pretend to know code,
Your contribution deserves to be showed.
Webchick will help you, as luck would have it,
When your first core improvement is ready to commit.
(Stick around until Friday, and you won’t regret it)

And if this is your first time at DrupalCon...

Did you know that Drupal is eighteen years young?
We’re just getting started, we’re still having fun!
Of all websites, we’d say, five in a hundred,
The more ambitious ones are quite well funded,
But even for NGOs our scopes are not blunted.
We’ve got thousands of community projects,
To extend, integrate, and meet all of your specs.
Even if you don’t speak like a geek,
I’m sure the vendors won’t judge you like some kind of freak.
They’ll help you translate the Greek, so to speak.

If this is your first event, or your hundred and first,
We welcome you to dive in to Drupal head first.
Bring us a challenge, we’ll find you a guru,
As long as you understand our Kool-Aid is blue.
(We do have other interests besides Open Source,
We’re human aren’t we? Why yes, of course!)

Have you met Dries Buytaert? He’s the project’s creator.
Without him, I might be a Red Lobster waiter.
He’s tall and Belgian, well over two meters,
(Not that I’ve measured him, that would be cheating),
He’s the lead of this project and he helps set the agenda
To give us some staying power and stay just a bit trendy.
There are really so many others to thank,
Given ten thousand thank you notes, none would be blank.
It takes a village, at least that’s what I think.

So welcome to the land of rainstorms and coffee,
Of mega-giant software companies,
To DrupalCon Seattle, it’s like my family reunion.
Think Digital. Be Human.

DrupalEasy News

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Mar 31 2019
Mar 31

Progressive web apps (PWAs) have made great inroads, and one of the greatest examples can be seen in Twitter. As one of the most popular platforms for knowing what’s happening around the world, with millions of active users who consume, create, and share data, Twitter has been a force to be reckoned with, and it has leveraged the power of PWAs. To enhance its mobile web experience and make it faster, more reliable, and more engrossing, Twitter built the Twitter Lite Progressive Web App. It extracts the best of modern web and native features to offer instant loading, improved user engagement, and lower data consumption.

a mobile phone with the screen showing twitter app


Big names like Twitter are traversing the path of PWA. And that’s been the success story of PWA. By taking advantage of major advances in modern web browsers, web APIs and front-end frameworks, progressive web apps deliver stupendous app experiences to mobile and desktop users. Drupal, as a leading content management system, offers a wonderful platform for creating a progressive web app. Let’s take a brief look at PWA before moving on to Drupal’s capability.

Understanding Progressive Web Apps

The history of progressive web apps starts in 2007, when the iPhone came into the picture, states Venn. This was also when Web 2.0 was taking shape and the HTML5 standard was still being defined. Web pages started becoming more dynamic, which altered the way we used the web on our devices. A number of PWA features are continuations of the development of those integral technologies.

Three screenshots from a mobile phone showing the Instagram app (Source: freeCodeCamp)

What are progressive web apps? You might have seen an ‘Add to Home Screen’ banner while browsing a website, as depicted in the picture above. On clicking this button, the application installs itself in the background. Once the download is complete, the application sits in your app drawer. What you now have is a mobile application, a PWA, that did not require an app store and was downloaded from the web application itself. A PWA, then, can be installed from the browser window, lives on your phone like a native application, and even works offline.

"Progressive web apps use modern web APIs along with traditional progressive enhancement strategy to create cross-platform web applications. These apps work everywhere and provide several features that give them the same user experience advantages as native apps." - MDN web docs

Coined by Alex Russell and Frances Berriman, ‘PWA’ describes a set of best practices for making a web application function the same way a desktop or mobile application would. The idea is an experience so uniform and seamless that the user is unable to differentiate between a progressive web app and a native mobile app.

Why build a Progressive Web App?

Eight hexagons containing icons resembling mobile phones, folders, arrows and desktops (Source: Lambda Test)

Progressive web apps have progressive enhancement as a core tenet, so they work for all users no matter which browser they choose. They are also fully responsive and work across platforms. Moreover, the Web App Manifest enables your PWA to deliver the look and feel people expect, letting you specify an icon, app name and splash-screen colour. It lets users install the PWA on their device and have it appear alongside native apps.
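
The Web App Manifest mentioned above is a small JSON file served alongside the site. For illustration, a minimal manifest.json might look like this; the name, colours and icon path are placeholder assumptions, not values from any real site:

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0678be",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The "display": "standalone" member is what makes the installed app open without browser chrome, so it appears alongside native apps.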

Powered by service workers, a PWA is connectivity independent, allowing it to be used offline and on low-quality networks, and enabling background data syncing. The service worker update process also keeps it up to date. Because of the W3C manifest and service worker registration scope, a PWA is identifiable as an application, which enables search engines to find it.
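
The offline behaviour described above comes from the service worker intercepting requests and answering from a cache. As a rough sketch of the cache-first idea, here is a plain function with a Map standing in for the browser Cache API; the names and data shapes are illustrative assumptions, not any module’s actual code:

```javascript
// Cache-first strategy sketch: answer from the cache when possible,
// otherwise fetch from the network and remember the response.
function cacheFirst(cache, url, fetchFromNetwork) {
  if (cache.has(url)) {
    // Cache hit: works offline and on low-quality networks.
    return cache.get(url);
  }
  // Cache miss: go to the network, then store the response for next time.
  const response = fetchFromNetwork(url);
  cache.set(url, response);
  return response;
}

// Example: the second request for the same URL never reaches the network.
const cache = new Map();
let networkCalls = 0;
const fakeFetch = (url) => { networkCalls += 1; return `body of ${url}`; };

cacheFirst(cache, '/news', fakeFetch);              // network fetch
const body = cacheFirst(cache, '/news', fakeFetch); // served from cache
```

In a real service worker the same decision is made inside a `fetch` event listener using the asynchronous Cache API.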

PWAs are also faster to load and install, as depicted in the image below.

Bar graph with green and blue bars representing progressive web app usage (Source: App Institute)

The app shell model’s separation of application functionality from application content makes a PWA feel like a native app. PWAs also offer top-notch security: they are served via HTTPS to prevent snooping and to ensure the content is tamper-proof.

Progressive web apps come with push notification capabilities, and an application can easily be shared through a URL without any intricate installation.

PWAs also take up less storage space, as can be seen in the illustration below.

Illustration of a mobile phone with drawers protruding out of it, explaining progressive web apps (Source: App Institute)

Progressive web app with Drupal

PWAs can be built with front-end frameworks like Angular, React, Polymer and Ionic. How can it be done with Drupal? A progressive web app can be integrated with Drupal using the Progressive Web App module. Designed to work out of the box, this module adds basic PWA functionality to your website. There are countless ways to customise the experience by writing your own service worker, but for basic offline functionality this module is great and does not involve too much intricacy.

It leverages a service worker and manifest.json to offer a more app-like experience on mobile devices, and it requires the website to be served over valid HTTPS: the W3C specification mandates that service workers only function on HTTPS.

The module automatically triggers the ‘Add to Home Screen’ prompt whenever a user visits your site, and it scores well in Lighthouse audits.

You can install this module by downloading and enabling the PWA module; the admin configuration has to be saved at least once before visitors can start revelling in the merits of your new PWA.

Prompt pop-up of the Progressive Web App Drupal module configuration, showing message boxes and drop-down options


What lies ahead?

Comscore states in its research study that 51% of users do not download any app in a month. There is a plethora of dead weight in the app stores, and developing brand-new apps separately for Android, iOS and the web is neither cost-effective nor quick.

Are progressive web apps the future of apps? With big names like Twitter and Forbes showing an inclination towards PWAs, a definite rise in PWA adoption can be expected in the coming years. In fact, Gartner predicts that progressive web apps will replace 50% of general-purpose, consumer-facing mobile applications by 2020.

A report from SBWire states that the PWA market will grow at a compound annual growth rate (CAGR) of more than 10% between 2017 and 2025. Advancements in IT, the emergence of smart devices, and enhanced awareness of new technology among people are touted as factors in its growth.

Conclusion

A progressive web app is a great way of offering an app-like experience on your website, and Drupal can be a stupendous option for enhancing your site with a PWA.

We have been committed to delivering ambitious digital experiences through a suite of services. Talk to our Drupal experts at [email protected] and let us know how you want us to be a part of your digital transformation journey.

Mar 31 2019
Mar 31

You may be involved in a coffee mishap on the way out of the door, leaving a stain on your shirt. This is among the numerous stains that Tide’s Stain Remover, an Alexa skill, can help you remove.

Amazon Echo powered by Alexa, a plant pot, and a polaroid land camera placed close to each other


Voice assistants like Alexa are beginning to play a colossal part in our everyday lives. That is exactly why Tide, one of the largest producers of laundry products, has plunged in to use Alexa as a stain-removal expert. With the growing popularity of Amazon Alexa, organisations should consider the best ways to extend their omnichannel content strategy to include disseminating content on voice and chat platforms. Integrating Alexa with Drupal, one of the leading content management systems in the market, allows content to be accessed both via the web and via voice assistants.

Alexa: A quintessential voice assistant

As Amazon’s cloud-based voice service, Alexa is available on a plethora of devices from Amazon, like the Echo, and from third-party device manufacturers. It was named after the Library of Alexandria, which attempted to collect all of the world’s knowledge. It lets you state your wishes, at least the simple ones like playing music and finding food recipes, and fulfils them.

“With Alexa, you can build natural voice experiences that offer customers a more intuitive way to interact with the technology they use every day.” - Amazon Alexa

[embedded content]


Its collection of tools, APIs, reference solutions and documentation lets anyone build with Alexa. Creating cloud-based skills helps you disseminate content and reach customers via millions of Alexa-enabled devices. The Alexa Skills Kit lets you build engrossing voice-first experiences, while the Alexa Voice Service lets you develop voice-forward products by incorporating Alexa into your devices or controlling your devices with Alexa. You can even leverage it for your business by making it easy for users to access your services by voice.

Gartner predicts that because of the staggering advancements in emotion artificial intelligence (AI) systems, personal devices will know more about an individual’s emotional state. Alexa will thus get to know us more and more in the future, and may even be able to detect and assess how we are feeling from the tone of our voice.

Alexa also leads in market share among smart speakers, as can be seen below.

Graphical representation showing 11 different blue and green coloured squares and rectangles arranged inside a big square explaining statistics on smart speakers


Amalgamation of Alexa and Drupal

A tweet by Dries Buytaert on Alexa and Drupal with the image of Dries on top left


Amazon Alexa can be integrated with Drupal with the help of the Alexa Drupal module. For this, the Drupal website should be available online and served over HTTPS. You can start by installing and enabling the Alexa module on the Drupal site. Then, a new Alexa skill can be created in the Alexa Skills Kit. This is followed by copying the Application ID provided by Amazon under ‘Skill Information’ and submitting it in the Drupal site’s configuration. You can then move on to configuring the Alexa skill in the Alexa Skills Kit and creating a customised handler module for managing custom Alexa skills.

To demonstrate how this works, a digital agency used a sample supermarket chain called Gourmet Market and connected Alexa to its Drupal-powered site using the Alexa module. First, a list of intents is specified: the commands you want users to be able to run, similar to Drupal’s routes. This is followed by specifying a list of utterances, the sentences you want the Echo to react to. After a command is executed, the Drupal site receives a webhook callback and the Alexa module validates the request.
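
In the Alexa Skills Kit, the intents and sample utterances described above are declared in the skill’s interaction model, a JSON document edited in the console. The invocation name, intent name and phrases below are illustrative guesses modelled on the Gourmet Market example, not the agency’s actual configuration:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "gourmet market",
      "intents": [
        {
          "name": "OnSaleIntent",
          "samples": [
            "what fruits are on sale",
            "which items are on sale today"
          ]
        }
      ]
    }
  }
}
```

When a user says one of the sample phrases, Alexa resolves it to OnSaleIntent and posts the request to the skill’s configured endpoint, which here is the Drupal site’s webhook.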

[embedded content]


Suppose you ask Alexa about the fruits that are on sale. Alexa makes a call to the Gourmet Market Drupal site and comes back with the relevant information. Certain items can also be tagged ‘On Sale’ by the store manager, and those changes are automatically and swiftly reflected in Alexa’s voice responses. The best part is that the marketing manager won’t require any programming skills, as Alexa forms its voice responses by talking to Drupal 8 via web service APIs.

The site could also deliver smart notifications. When a user asks about an item that is not on sale, the site can automatically notify them by text once the store manager tags it ‘On Sale’.

The digital agency showed another example of combining Alexa and Drupal through a fictional grocery store called Freshland Market. Here, a user chooses a food recipe from Freshland Market’s Drupal site and collects the ingredients to cook it. The user asks for the recipe for 8 people, but the site stores it for 4 people. The Freshland Market Alexa skill adjusts the ingredient quantities for 8 people by itself. So, guided through a series of questions and the relevant ingredients and cooking steps, the user is easily able to prepare the food without having to look at a laptop or mobile phone.
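
The serving-size adjustment can be sketched as simple proportional scaling. The data shape and function name below are illustrative assumptions, not the Freshland Market skill’s actual code:

```javascript
// Scale a recipe's ingredient quantities from the serving count stored
// on the site to the serving count the user asked for.
function scaleRecipe(recipe, targetServings) {
  const factor = targetServings / recipe.servings;
  return {
    servings: targetServings,
    ingredients: recipe.ingredients.map((ing) => ({
      name: ing.name,
      quantity: ing.quantity * factor,
      unit: ing.unit,
    })),
  };
}

// The site stores the recipe for 4 people; the user asked for 8.
const recipe = {
  servings: 4,
  ingredients: [
    { name: 'flour', quantity: 200, unit: 'g' },
    { name: 'eggs', quantity: 2, unit: '' },
  ],
};

const scaled = scaleRecipe(recipe, 8); // flour becomes 400 g, eggs become 4
```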

[embedded content]


Conclusion

The coming together of Alexa and Drupal can be a great solution for removing friction from user experiences. With Drupal as a stupendous content store and Alexa as a quintessential voice assistant, you can bring about a world of difference.
 
We believe in open source innovation and are committed to offering great digital experiences with our expertise in Drupal development. Talk to our Drupal experts at [email protected] and let us know how you want us to be a part of your digital transformation endeavours.

Mar 29 2019
Mar 29

DrupalCon Seattle is next week! We’re excited to get together with the community for learning and collaborations.

But first, we have to travel to Seattle. We’re so excited that we made a Spotify playlist of all Seattle bands.

So much great music has come from Seattle, you’re bound to find something you like.

We’re busy at DrupalCon with summits, sessions, community work, and more. Come visit us at booth #306, or check out where we will be below. Either way, come say hello.

Summits

  • Monday April 8, 8:30 am – 12:30 pm: Anne Stefanyk is joining Pantheon at Selling to the Marketing Buyer.
  • Tuesday, April 9, 9:00 am – 4:30 pm: Anne will be leading an afternoon breakout session at the Nonprofit Summit.
  • Tuesday, April 9, 10:00 am – 4:30 pm: AmyJune Hineline will be leading the Community Summit.

Community

Our community liaison AmyJune will be staffing the Core Mentoring booth on Wednesday and Thursday. She’ll also be doing two workshops:

Sessions

Kanopians are speaking at three sessions:

Birds of a Feather (BOFs)

BOFs are a great way to have intimate discussions on topics, and collaborating with peers is one of our favorite things.

  • Wednesday 11:00 am: AmyJune Hineline is hosting one on SimplyTest.me.
  • Wednesday 4:45 pm: Sean Dietrich is hosting one on Docksal.
  • Thursday 2:30 pm: Jim Birch is hosting the Drupal and SEO BOF.

Collaborations

Each Kanopian will be collaborating all week long while at DrupalCon! Keep a look out for Jim, Jason, Cindy, Kat, AmyJune, and Sean as they join other Drupalers to help push the Drupal project forward.

Looking forward to seeing you there!


Presentation Videos


Deep Cleaning: Creating Franchise Model Efficiencies with Drupal 8

Presenters: Anne Stefanyk and Katherine White

COIT offers cleaning services and 24/7 emergency restoration services and their 100+ locations serve more than 12 million homes & businesses across the United States and Canada. But their own website was a huge mess. In this case study we will cover the more technical parts of this Drupal 8 implementation.

[embedded content]


How to Work Remotely and Foster a Happy, Balanced Life

Presenter: Anne Stefanyk

In this session, we will talk about how to be the best remote employee, and will also provide strategies and ideas if you are a leader of a remote team. We will talk about key tactics to keep you (and all other staff) inspired, creative, productive and most importantly, happy!

[embedded content]

Pages

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web