Oct 22 2016

Developers have many tools. We have version control systems, we have dependency management tools, we have build and task automation tools. What is one thing they all have in common? They are command line tools.

However, the command line can be daunting to some and create a barrier to adoption. We’re currently experiencing this in the Drupal community. People are considering Composer a barrier to entry, and I believe it’s just because it’s a command line tool and something new.

Providing a user interface

User interfaces can make tools more usable. Or at least lower the barrier to entry. I like to think of Atlassian's SourceTree. SourceTree is an amazing Git user interface and was my entry into learning how to use Git.

If you work with Java, your IDE provides some sort of user interface for managing your dependencies via Maven.

The PhpStorm IDE provides rudimentary Composer support - initializing a project and adding dependencies - but doesn’t support entire workflows. It’s also proprietary.

Here’s where I introduce Conductor: a standalone Composer user interface built on Electron. The purpose of Conductor is to give users working with PHP projects an interface for managing their projects outside of the command line.

Hello, Conductor

Conductor interfaces

Conductor provides an interface for:

  • Creating a new project based on a Composer project
  • Managing projects to install, update, add, and remove dependencies
  • Viewing the dependencies inside a project
  • Updating or removing individual dependencies by reviewing them inside the project
  • Running Composer commands from the user interface and reviewing console output

The project is in initial development, thanks to the downtime created by the Dyn DDoS attack.

The initial application is now a bit beyond a minimal viable product. It works. It can be used. But now it needs feedback from users who feel that Composer is a barrier, as well as code improvements.

Head over to GitHub and check out the project. Currently there are no prebuilt binaries, so you'll need to build it manually to try it out.

Oct 21 2016
This blog post describes how to add a date pop-up calendar to a custom form in Drupal 7.

Use a date pop-up calendar in a custom form - Drupal 7

The use case: you want to use a date pop-up calendar in a custom form in Drupal 7. The Drupal 7 Form API provides many form element types, such as textfield, checkbox, and checkboxes, for creating custom forms. Similarly, the Date module provides a form element type called date_popup, which we can use to display a date pop-up calendar in a custom form.

Use the date pop-up calendar with a custom form in Drupal 7:

Consider the code snippet below:

function phponwebsites_menu() {
  $items = array();

  $items['customform'] = array(
    'title' => t('Custom Form'),
    'type' => MENU_CALLBACK,
    'page callback' => 'drupal_get_form',
    'page arguments' => array('phponwebsites_display_date_popup_form'),
    'access callback' => TRUE,
  );

  return $items;
}

function phponwebsites_display_date_popup_form($form, &$form_state) {
  $form['date'] = array(
    '#type' => 'date_popup',
    '#default_value' => date('Y-m-d'),
    '#date_format'   => 'Y-m-d',
    '#date_year_range' => '0:+5',
    '#datepicker_options' => array('minDate' => 0, 'maxDate' => 0),
  );

  return $form;
}

  • '#date_format' => 'Y-m-d' if you need to display only the date
  • '#date_format' => 'Y-m-d H:i:s' if you need to display date and time
  • '#date_year_range' => '0:+5' if you need to display only the next 5 years
  • '#datepicker_options' => array('minDate' => 0, 'maxDate' => 0) if you want to display only the current date; this option hides past and future dates
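As a side note, the '#date_format' values above are ordinary PHP date format strings, so you can check what they produce with plain PHP. A quick sketch (the timestamp is just an arbitrary example):

```php
<?php
// The '#date_format' values are standard PHP date format strings.
// Using an arbitrary example timestamp: 21 Oct 2016, 14:30:00.
$timestamp = mktime(14, 30, 0, 10, 21, 2016);

echo date('Y-m-d', $timestamp) . "\n";  // 2016-10-21 (date only)
echo date('Y-m-d H:i:s', $timestamp);   // 2016-10-21 14:30:00 (date and time)
```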

Please add the above code to your module file and visit the "customform" page. It looks like the image below:

Display only the current date in the date pop-up - Drupal 7

I hope you now know how to add a date pop-up calendar to a custom form in Drupal 7.

Related articles:
Remove special characters from URL alias using pathauto module in Drupal 7
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Disable future dates in date pop-up calendar Drupal 7
Oct 21 2016

Last month saw DrupalCon arrive in Dublin – the first European DrupalCon since the release of Drupal 8.

The Ixis team were out in force, exhibiting as a gold sponsor (along with a small army of Druplicon stress-balls). As well as meeting lots of other Drupal developers and businesses deploying open source infrastructure, one of the highlights of the convention was our Technical Director, Mike Carter, being given the opportunity to co-present a Drupal Showcase session detailing our work with the British Council.

Mike Carter at Drupalcon

(image thanks to Paul Johnson)

With locations on six continents and in more than 100 countries, the British Council is the UK’s international organisation for cultural relations and educational opportunities. Its web infrastructure includes a global site, approximately 110 country websites, 10-15 project websites and various smaller micro-sites.

In 2015, in an effort to improve performance and reliability, the decision was taken to adopt container-based hosting for this network of sites. After a competitive procurement process, we led the migration of the entire network to containers hosted on infrastructure provided by our partner.

If you didn’t make it to DrupalCon Dublin or want a recap of the showcase session, our full presentation can be viewed below, or you can read the full case study here.

Oct 21 2016

Republished from

Nasdaq chooses Drupal 8

I wanted to share the exciting news that Nasdaq Corporate Solutions has selected Drupal 8 as the basis for its next generation Investor Relations Website Platform. About 3,000 of the largest companies in the world use Nasdaq's Corporate Solutions for their investor relations websites. This includes 78 of the Nasdaq 100 Index companies and 63% of the Fortune 500 companies.

What is an IR website? It's a website where public companies share their most sensitive and critical news and information with their shareholders, institutional investors, the media and analysts. This includes everything from financial results to regulatory filings, press releases, and other company news. Examples of IR websites include http://investor.starbucks.com and -- all three companies are listed on Nasdaq.

All IR websites are subject to strict compliance standards, and security and reliability are very important. Nasdaq's use of Drupal 8 is a fantastic testament for Drupal and Open Source. It will raise awareness about Drupal across financial institutions worldwide.

In their announcement, Nasdaq explained that all the publicly listed companies on Nasdaq are eligible to upgrade their sites to the next-gen model "beginning in 2017 using a variety of redesign options, all of which leverage Acquia and the Drupal 8 open source enterprise web content management (WCM) system."

It's exciting that 3,000 of the largest companies in the world, like Starbucks, Apple, Amazon, Google and ExxonMobil, are now eligible to start using Drupal 8 for some of their most critical websites. 

Oct 20 2016

Drupal 8 is here which means I have had the privilege of working on my first D8 projects and the migrations that accompany them. I wanted to share some of the key findings I’ve taken away from the experience.

Migrate in Drupal 8 is awesome as long as you know what you are looking at. It is flexible, powerful and relatively easy to read. But as is the case with most things, a lot of its power is tucked away where it is hard to find if you don't know where to look. This is definitely the case with the Migrate Plus XML data parser plugin, which is presently available only in the dev version of Migrate Plus. It is a pretty solid tool for migrating from a variety of XML-based sources, and today we are going to talk about how to use it.

The first thing we have to consider is where our data is coming from. Migrate Plus expects to have this information fed to it in the form of a URL, which gives us two options:

  1. our source is from outside the website, like an RSS feed; or
  2. it is stored locally.

If you have an external URL, all you need to do is plug it into the urls parameter. If your source is stored locally, you will either need to construct a URL for the source or store it in the private file directory, using the private:// stream wrapper. I would go for the latter as it involves less overhead. At this point your migration source should look something like this:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: xml
  urls: private://migration.xml

This brings us to parsing out the XML. All of the selectors we will be talking about use xpath. The first thing you need to do is define the item selector so Migrate can identify the individual items to migrate into your chosen destination. For example, if we were migrating posts from a WordPress export it might look something like this:

item_selector: /rss/channel/item[wp:post_type="post"]

Next up we need to map all of our fields to nice, readable machine names that we can use in the process part of the migration. Each field will have a name that will identify it in other parts of the migration, a label for describing what sort of data we will find in that XML element, and a selector so the migration can map that data from the xml file:

fields:
  -
    name: title
    label: Content title
    selector: title
  -
    name: post_id
    label: Unique content ID
    selector: wp:post_id
  -
    name: content
    label: Body of the content
    selector: content:encoded
  -
    name: post_tag
    label: Tags assigned to the content item
    selector: 'category[@domain="post_tag"][email protected]'

If you are using anything more complicated than the XML node names, you will need to wrap the selector as a string. The selectors are being passed to xpath in the data processor, so you can get pretty precise in selecting XML nodes.

All that is left to do is define the migration ID and you have your source all ready to go:

ids:
  post_id:
    type: integer

Put it all together and you should have something that looks something like this:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: xml
  urls: private://migration.xml
  item_selector: /rss/channel/item[wp:post_type="post"]
  fields:
    -
      name: title
      label: Content title
      selector: title
    -
      name: post_id
      label: Unique content ID
      selector: wp:post_id
    -
      name: content
      label: Body of the content
      selector: content:encoded
    -
      name: post_tag
      label: Tags assigned to the content item
      selector: 'category[@domain="post_tag"][email protected]'
  ids:
    post_id:
      type: integer

A note on prefixed namespaces: you can see we mixed XML nodes that have prefixes with those that don’t. Sometimes Migrate handles this with no problem at all; sometimes it refuses to fetch data from XML nodes that don’t have prefixes. As far as I can tell, it does this when one of the nodes in the item_selector has a prefix (although it doesn’t seem to have this problem with the filters in the item_selector). If you have a data source with a prefixed parent node, you can still get non-prefixed children by using the following syntax:

name: description
label: Content description
selector: '*[local-name()="description"]'

It will allow you to select XML nodes with a given local name regardless of the prefix, which is very handy when you have no prefix at all.
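To see this xpath trick outside Migrate, here is a small plain-PHP sketch using SimpleXML; the XML document, its namespace URL, and its content are purely illustrative:

```php
<?php
// Illustrative sketch: trying the local-name() xpath selector in plain PHP,
// outside of Migrate, on a made-up WordPress-export-like snippet.
$doc = <<<XML
<rss xmlns:wp="http://wordpress.org/export/1.2/">
  <channel>
    <item>
      <wp:post_id>42</wp:post_id>
      <description>A sample description</description>
    </item>
  </channel>
</rss>
XML;

$xml = new SimpleXMLElement($doc);
// Selects the child element named "description" regardless of its prefix.
$nodes = $xml->xpath('/rss/channel/item/*[local-name()="description"]');
echo (string) $nodes[0]; // prints "A sample description"
```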

Oct 20 2016
Website navigation is something you probably use every day but don’t think too much about. This is how you travel from page to page within a website. It is probably the most used part of your website that you spend the least amount of time evaluating, right? I used to feel the same way. A few years ago, I inherited a site navigation that seemed to be working so my team focused on growing other areas of the site. Looking into how our menu was organized was low priority.
Oct 20 2016

When Tag1 decided to build Tag1 Quo, we knew there was one question we’d have to answer over, and over, and over again: is there a security update available for this extension? Answering that question - at scale, for many websites, across many extensions, through all the possible versions they might have - is the heart of what Quo does.

The problem seems simple enough, but doing it at such scale, for “all” versions, and getting it right, has some deceptive difficulties. Given a site with an extension at a particular version, we need to know where it sits on the continuum of all versions that exist for that extension (we often refer to that as the “version universe”), and whether any of the newer versions contain security fixes.

There are a few different approaches we could’ve taken to this problem. The one we ultimately settled on was a bit more abstracted than what might initially seem necessary. It was also not a “typical” Drupal solution. In this blog series, I’ll cover both the theoretical foundation of the problem, and the approach we took to implementation.

What’s a version?

Let's start at the beginning.

Quo works by having existing Drupal 6 (for now!) sites install an agent module, which periodically sends a JSON message back to the Quo servers indicating what extensions are present on that site, and at what versions. Those “versions” are derived from .info files, using functions fashioned after the ones used in Drupal core. Once that JSON arrives at Quo’s servers, we have to decide how to interpret the version information for each extension.

All versions arrive as strings, and the first decision Quo has to make is whether that string is “valid” or not. Validation, for the most part, means applying the same rules as applied by the release system. Let’s demonstrate that through some examples:

6.x-1.0

Drupal extension versions are a bit different than most software versions in the wild, because the first component, 6.x, explicitly carries information about compatibility, not with itself, but with its ecosystem: it tells us the version of Drupal core that the extension is supposedly compatible with.

The next part, 1.0, holds two bits of information: the major (1) and minor (0) versions. It’s wholly up to the author to decide what numbers to use there. The only additional meaning layered on is that releases with a 0 major version, like 6.x-0.1, are considered to be prerelease, and thus don’t receive coverage from the security team. Of course, Quo’s raison d’etre is that Drupal 6 is no longer supported by the security team, so that distinction doesn’t really matter anymore.

Now, 6.x-1.0 is an easy example, but it doesn’t cover the range of what’s possible. For example:

6.x-1.0-alpha1

This adds the prerelease field - alpha1. This field is optional, but if it does appear, it indicates the version to be some form of prerelease - and thus, as above, not covered by the security team. The release system also places strong restrictions on the words that can appear there: alpha, beta, rc, and unstable. Additionally, the release system requires that the word be accompanied by a number - 1, in this case.

There are a couple of notably different forms that valid Drupal versions can come in. There can be dev releases:

6.x-1.x-dev

Dev releases can only have the compatibility version and a major version, and cannot have a prerelease field. They’re supposed to represent a “line” of development, where that line is then dotted by any individual releases with the same major version.

And of course, Drupal core itself has versions:

6.43

Core versions have the same basic structure as the major version/minor version structure in an extension. Here, the 43 is a minor version - a dot along the development line of the 6 core version.

These examples illustrate what’s allowed by the release system, and thus, the shape that all individual extensions’ version universe will have to take. All together, we can say there are five discrete components of a version:

  • Core version
  • Major version
  • Minor version
  • Prerelease type
  • Prerelease number

Viewed in this way, it’s a small step to abstracting the notion of version away from a big stringy blob, and towards those discrete components. Specifically, we want to translate these versions into a 5-dimensional coördinate system, or 5-tuple: {core, major, minor, prerelease_type, prerelease_num}, where each of these dimensions has an integer value. Four of the components are already numbers, so that’s easy, but prerelease type is a string. However, because there’s a finite set of values that can appear for prerelease type, it’s easy to map those strings to integers:

  • Unstable = 0
  • Alpha = 1
  • Beta = 2
  • Rc = 3
  • (N/A - not a prerelease) = 4

With this mapping for prerelease types, we can now represent 6.x-1.0-alpha1 as {6,1,0,1,1}, or 6.x-2.3 as {6,2,3,4,0}.
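As a rough sketch of that translation (this is illustrative, not Quo’s actual code; parse_drupal_version is a made-up helper name):

```php
<?php
// Illustrative sketch: parsing a Drupal extension version string into the
// 5-tuple {core, major, minor, prerelease_type, prerelease_num}.
function parse_drupal_version($version) {
  $types = ['unstable' => 0, 'alpha' => 1, 'beta' => 2, 'rc' => 3];
  $pattern = '/^(\d+)\.x-(\d+)\.(\d+)(?:-(unstable|alpha|beta|rc)(\d+))?$/';
  if (!preg_match($pattern, $version, $m)) {
    return NULL; // Not a valid release version string.
  }
  $prerelease = isset($m[4]) && $m[4] !== '';
  return [
    (int) $m[1],                      // core
    (int) $m[2],                      // major
    (int) $m[3],                      // minor
    $prerelease ? $types[$m[4]] : 4,  // prerelease type (4 = not a prerelease)
    $prerelease ? (int) $m[5] : 0,    // prerelease number
  ];
}

parse_drupal_version('6.x-1.0-alpha1'); // [6, 1, 0, 1, 1]
parse_drupal_version('6.x-2.3');        // [6, 2, 3, 4, 0]
```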

However, that’s not quite the end of the story. The Quo service is all about delivering on the Drupal 6 Long Term Support (D6LTS) promise: providing and backporting security fixes for Drupal 6 extensions, now that they’re no longer supported by the security team.

Because such fixes are no longer official, we can’t necessarily expect there to be proper releases for them. At the same time, it’s still possible for maintainers to release new versions of 6.x modules, so we can’t just reuse the existing numbering scheme - the maintainer might later release a conflicting version.

For example, if D6LTS providers need to patch version 6.x-1.2 of some module, then we can’t release the patched version as 6.x-1.3, because we don’t have the authority to [get maintainers to] roll official releases (we’re not the security team, even though several members work for Tag1), and the maintainer could release 6.x-1.3 at any time, with or without our fix. Instead, we have to come up with some new notation that works alongside the existing version notation, without interfering with it.

Converting to the coördinate system gives us a nice tip in the right direction, though - we need a sixth dimension to represent the LTS patch version. And “dimension” isn’t metaphorical: for any given version, say {6, 1, 0, 1, 1} (that is, 6.x-1.0-alpha1), we may need to create an LTS patch to it, making it {6, 1, 0, 1, 1, 1}. And then later, maybe we have to create yet another: {6, 1, 0, 1, 1, 2}.

Now, we also have to extend the string syntax to support this sixth dimension - remember, strings are how the agent reports a site’s extension versions! It’s easy enough to say “let’s just add a bit to the end,” like we did with the coördinates, but we’re trying to design a reliable system here - we have to understand the implications of such changes.

Fortunately, this turns out to be quite easy: {6, 1, 0, 1, 1, 1} becomes 6.x-1.0-alpha1-p1; {6, 2, 3, 4, 0, 1} becomes 6.x-2.3-p1. This works well specifically because the strings in the prerelease type field are constrained to unstable, alpha, beta, and rc - unlike in semver, for example:
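Peeling that suffix back off a version string can be sketched in the same illustrative spirit (parse_lts_suffix is a made-up helper name, not Quo’s code):

```php
<?php
// Illustrative sketch: splitting a trailing "-pN" LTS patch suffix off a
// version string. Returns the base version plus the LTS patch number
// (0 when there is no suffix).
function parse_lts_suffix($version) {
  if (preg_match('/^(.+)-p(\d+)$/', $version, $m)) {
    return [$m[1], (int) $m[2]];
  }
  return [$version, 0];
}

parse_lts_suffix('6.x-1.0-alpha1-p1'); // ['6.x-1.0-alpha1', 1]
parse_lts_suffix('6.x-2.3-p1');        // ['6.x-2.3', 1]
parse_lts_suffix('6.x-2.3');           // ['6.x-2.3', 0]
```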

     A pre-release version MAY be denoted by appending a hyphen and a series
     of dot separated identifiers immediately following the patch version.
     Identifiers MUST comprise only ASCII alphanumerics and hyphen

     ...identifiers consisting of only digits are compared numerically and
     identifiers with letters or hyphens are compared lexically in ASCII
     sort order.

In semver, prerelease information can be any alphanumeric string, can be repeated, and is compared lexicographically (that is, alpha < beta < rc < unstable). If Drupal versions were unbounded in this way, then a -p1 suffix would be indistinguishable from prerelease information, creating ambiguity and making conflicts possible. But they’re not! So, this suffix works just fine.

Now, a coordinate system is fine and dandy, but at the end of the day, it’s just an abstracted system for representing the information in an individual version. That’s important, but the next step is figuring out where a particular version sits in the universe of versions for a given extension. Specifically, the question Quo needs to ask is if there’s a “newer” version of the component available (and if so, whether that version includes security fixes). Basically, we need to know if one version is “less” than another.
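Assuming both versions are already in six-component tuple form, one natural sketch of that “less than” test is a lexicographic, component-by-component comparison (illustrative only, not necessarily Quo’s implementation):

```php
<?php
// Illustrative sketch: lexicographic comparison of two equal-length version
// tuples. Returns -1, 0 or 1, like strcmp().
function compare_version_tuples(array $a, array $b) {
  foreach ($a as $i => $component) {
    if ($component !== $b[$i]) {
      return $component < $b[$i] ? -1 : 1;
    }
  }
  return 0;
}

// 6.x-1.0-alpha1 sorts before 6.x-1.0 (prerelease type 1 vs 4):
compare_version_tuples([6, 1, 0, 1, 1, 0], [6, 1, 0, 4, 0, 0]); // -1
// An LTS patch sorts after its base version:
compare_version_tuples([6, 2, 3, 4, 0, 1], [6, 2, 3, 4, 0, 0]); // 1
```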

And that’s where we’ll pick up in the next post!

Oct 20 2016
Acquia Lift for Content Syndication is an Acquia product that allows you to have a central repository for your content and syndicate it out to various connected sites. All the connected sites are able to send content to the repository for use on any other connected site. In an ideal world, all the connected sites would have matching content type and field machine names. However, in our reality, we had two existing sites where this was not true.
Oct 20 2016

Let me tell you a story. When I joined Acquia in April 2011, the Support group was a small pool of passionate and talented drupalists working day and night to serve our customers. And there was Kenny, aka webkenny: the vocal, outspoken and hilarious personality that was going to accompany Tim Millwood and me every morning as we held down the fort during EMEA hours, while the company was scaling up.


So, yesterday evening, when we received an email informing us Kenny had passed away, this brought back all those memories, and a deep sadness, because we all fought closely together to make things right and help each other.

It is with great sadness that we learned today that Kenny Silanskas has passed away. This is a terrible loss for Kenny’s family, for us at Acquia, and for the Drupal community.

Kenny joined Acquia in 2009 and was instrumental in building the Acquia spirit and bringing the energy needed to wake up every day and start all over again. Not without a lot of laughter.

[embedded content]

...and certainly not without killing it on stage.

[embedded content] [embedded content]

When he decided to take on another challenge, I recommended him on Linkedin and wouldn't change a word today:

Kenny is a natural-born leader who has a passion for sharing knowledge and moving mountains. From Client Advisor to Technical Account Manager and then Support Team Manager, he has always gained respect from his peers both technically and by his positive and energetic attitude. Today I'm better at what I do thanks to the high standards of customer satisfaction Kenny taught me about.

Because, yes, Kenny was quite a character, a generous and kind-hearted man, but also an excellent troubleshooter and technical lead. Maybe you remember his epic DrupalCon London presentation with the 8-bit Dries?

[embedded content]

Life's full of surprises. Kenny had just come back to Acquia. We were so happy. But unfortunately, at the end of September he suddenly resigned. I didn't really understand what was happening and didn't ask questions. So, when he last tweeted a few days ago, I obviously didn't imagine this would mean so much, after all.

Life is not a series of unfortunate events, but change that evolves you with each challenge. And no, that's mine. Not Confucius.

— Kenny Silanskas (@webkenny) October 17, 2016

Acquia's co-founder sums it best.

My heart goes out to the family of @webkenny. He was a force of nature on this earth, and I lament his passing. RIP Kenny. Sing with angels.

— Jay Batson (@jab) October 20, 2016

Never forget that smile. You will be sorely missed, my friend.


A GoFundMe has been set up in his memory. Please consider donating.

Oct 20 2016

Android permission provisioning has changed recently. If an app uses Android SDK API level 22 or below, users are asked for all permissions in bulk at installation time, i.e. without granting the permissions the user cannot install the app. With Android 6's security feature called Runtime Permissions (API level 23 and above), the app can request a specific permission while in use, when the need arises (similar to iOS); e.g. you will be asked for the location permission when you actually try to access the location of the device.

To work with runtime permissions in Ionic, you need to install the Cordova diagnostic plugin, which gives you functions to fetch the status of the native APIs exposed to the app. If a certain permission is mandatory for your app, you can prompt the user to grant access before proceeding. There are also specific functions for requesting permissions.

Install the plugin

cordova plugin add cordova.plugins.diagnostic

To avail the features of Run Time Permission, you have to build the app with Android platform 6.

Check your current Android platform version:

ionic platform

If your Android platform version is below 5, you will need to update it to 5 or above.

Remove android platform:

ionic platform remove android

Install Android platform version 5 or above:

ionic platform add [email protected]

In config.xml, set the target SDK version to 23:

<preference name="android-targetSdkVersion" value="23" />

We are now all set to ask users for permissions on the fly. We will call a function to fetch the status of a particular permission for the app; based on the status, we either ask the user to grant the permission or skip it.

Add the following function to your app.js, inside the $ionicPlatform.ready() function.

This makes $rootScope.checkPermission() global, and you can call it whenever you wish to check whether the user has given the app permission to fetch the device's location.

$rootScope.checkPermission = function() {
  var setLocationPermission = function() {
    cordova.plugins.diagnostic.requestLocationAuthorization(function(status) {
      switch (status) {
        case cordova.plugins.diagnostic.permissionStatus.NOT_REQUESTED:
        case cordova.plugins.diagnostic.permissionStatus.DENIED:
          // Permission still missing; handle the refusal here.
          break;
        case cordova.plugins.diagnostic.permissionStatus.GRANTED:
        case cordova.plugins.diagnostic.permissionStatus.GRANTED_WHEN_IN_USE:
          // Permission granted; safe to access the device location.
          break;
      }
    }, function(error) {}, cordova.plugins.diagnostic.locationAuthorizationMode.ALWAYS);
  };

  cordova.plugins.diagnostic.getPermissionAuthorizationStatus(function(status) {
    switch (status) {
      case cordova.plugins.diagnostic.runtimePermissionStatus.GRANTED:
        // Already granted; nothing to do.
        break;
      case cordova.plugins.diagnostic.runtimePermissionStatus.NOT_REQUESTED:
      case cordova.plugins.diagnostic.runtimePermissionStatus.DENIED:
        // Ask the user for the permission.
        setLocationPermission();
        break;
      case cordova.plugins.diagnostic.runtimePermissionStatus.DENIED_ALWAYS:
        // User denied permanently; direct them to the app settings.
        break;
    }
  }, function(error) {}, cordova.plugins.diagnostic.runtimePermission.ACCESS_COARSE_LOCATION);
};
Run time location permission

Here is a link of the code snippet.

Of course, you can choose to skip all this and stick to SDK target version 22, but you will miss out on the cool new feature of Android 6 and an improved user experience.

Oct 20 2016

I saw a post recently about another Drupal 8 site that got launched. It was Rainforest Alliance. Nothing special at first. But then, out of curiosity, I clicked on it and checked it out. While I was admiring the impressive pictures of the forests, I suddenly remembered that over a month ago I read an article about the city of Boston launching its website on Drupal. Then it hit me. Who are the »big names« that Drupal can show off to the world and say 'These are our proudest members'?

As you may have heard or read, Drupal is a free and open-source content-management framework that provides the back-end framework for at least 2.2% of all websites worldwide. To put it in numbers, that's more than a million websites running on Drupal. They can be found in the form of personal blogs all the way up to corporate, political and governmental websites.


You can be sure that not every government, political party or personal blog keeps its Drupal site up to date, and not everyone runs the latest version of Drupal, known as Drupal 8. Nobody minds that. However, what are the criteria that label a company, political party or anything else a »big name«? We could find a lot of them from a lot of different sources, but I'll just take into account reputation, size and impact on its field of activity.

Drupal in sports

As a huge sports fan I'll start with sports. I, of course, watched this year's Olympic games as much as I could (the events were, to be fair, not held at the most convenient time for us Europeans to watch). If I didn't watch an event in Rio on television, I checked its outcome on the Olympics' website, which was brought to us by Drupal. Moreover, Drupal provides the Americans … pardon … all internet users with information from the NBA, Major League Soccer (MLS), the NCAA and some NFL teams like the Dallas Cowboys.

Rio on Drupal

Drupal in Public sector

Besides the city of Boston, the cities of London, Austin and Los Angeles all use Drupal. In America some states even use it, like the State of New York and the State of Georgia. We learned that the White House began using it a long time ago, and so do the Australian and Belgian governments. The United Nations, NATO and NASA are also not to be overlooked.

White House on Drupal

Drupal in Entertainment and Technology

Drupal is also very active in the entertainment 'sector', where it provides its services to Sony, MTV, PlayStation, Fox, Al Jazeera, NBC, The Economist and Puma. We may think that Drupal is absent from high technology, but it's quite the opposite. Tesla Motors, Sensio Labs, Cisco, Amazee Labs, eBay, Twitter and Pinterest are just some of the websites powered by Drupal.

If you would like to see more websites that use Drupal, you can check Dries Buytaert's list and then potentially show off your Drupal site if you have it.

Oct 20 2016

The Paragraphs module is a very good alternative to a WYSIWYG editor for those who want to allow Drupal users and editors to build complex page arrangements, combining text, images, video, audio, quotes, statement blocks, or any other advanced component.

Rather than letting users struggle with the text editor to build advanced pages (without ever reaching the level made possible by Paragraphs), we can let them compose pages from structured content components, each component being responsible for rendering its content, according to the selected settings, in a layout kept under control.

To cite one example among dozens (the possibilities are endless): rather than offering a simple bulleted list from the text editor, we can create a component that generates a more refined bulleted list. Each item in the list could, for example, have a pictogram, a title, a brief description and an optional link, and the content publisher could simply select the number of elements per row. For example, a layout of this kind.

A bulleted list formatted with Paragraphs

Offering these components enables an inexperienced user to create complex page layouts, with the only constraint being to focus on the content, and only the content.

The different form modes available for Paragraph components

We have several options for displaying, inside the content edit form, the components created with Paragraphs. We can show them in:

  • Open mode: the edit form of the Paragraph component is open by default
  • Closed mode: the edit form of the Paragraph component is closed by default
  • Preview mode: the paragraph component is displayed as rendered on the front end

Display settings for a paragraph component's form

On the content edit form, I tend to prefer keeping the default closed mode, to improve the editor's experience. If the page consists of many components (and that is the very purpose of the Paragraphs module), the open mode tends to intimidate the user, since the number of forms can be large, and it also makes reordering the different components very difficult. The preview mode, meanwhile, requires either integrating these components into the administration theme or opting to use the default theme when editing content.

The disadvantage of using the closed mode for editing components

Using the closed mode provides an overview of the different components used on the page (the content), and lets you rearrange them easily with a simple drag and drop. The various components are modified by expanding and collapsing them on demand.

Edit form of a content item composed of paragraphs

In this editing mode, the content's components are listed with the paragraph type used as their title. This can be a major drawback if the content uses many components of the same type: the publisher has no immediate cue to tell which content each component relates to.

Modify the label of paragraph components

We can overcome this issue by creating a small module responsible for changing the label of each component, retrieving it from the contents of certain fields of our paragraphs.

The general idea is to alter the content edit form, detect whether the content contains entity reference revision fields (used by Paragraphs), and if so, retrieve for each paragraph the value of a field (e.g. a field whose machine name contains the word title), then replace the label used in the edit form for each paragraph with this value.

Let's put this into practice with a PHP snippet implementing hook_form_alter().

// Add these use statements at the top of your .module file.
use Drupal\Core\Entity\ContentEntityForm;
use Drupal\Core\Entity\FieldableEntityInterface;
use Drupal\Core\Field\FieldConfigInterface;
use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_alter().
 */
function MYMODULE_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  $form_object = $form_state->getFormObject();

  // Paragraphs are only set on ContentEntityForm objects.
  if (!$form_object instanceof ContentEntityForm) {
    return;
  }

  /** @var \Drupal\Core\Entity\FieldableEntityInterface $entity */
  $entity = $form_object->getEntity();
  // We check that the entity fetched is fieldable.
  if (!$entity instanceof FieldableEntityInterface) {
    return;
  }

  // Check if an entity reference revision field is attached to the entity.
  $field_definitions = $entity->getFieldDefinitions();
  /** @var \Drupal\Core\Field\FieldDefinitionInterface $field_definition */
  foreach ($field_definitions as $field_name => $field_definition) {
    if ($field_definition instanceof FieldConfigInterface && $field_definition->getType() == 'entity_reference_revisions') {
      // Fetch the paragraph entities referenced.
      $entities_referenced = $entity->{$field_name}->referencedEntities();
      /** @var \Drupal\Core\Entity\FieldableEntityInterface $entity_referenced */
      foreach ($entities_referenced as $key => $entity_referenced) {
        $fields = $entity_referenced->getFieldDefinitions();
        $title = '';
        $text = '';
        foreach ($fields as $name => $field) {
          if ($field instanceof FieldConfigInterface && $field->getType() == 'string') {
            if (strpos($name, 'title') !== FALSE) {
              $title = $entity_referenced->{$name}->value;
            }
            // Fallback to text string if no title field found.
            elseif (strpos($name, 'text') !== FALSE) {
              $text = $entity_referenced->{$name}->value;
            }
          }
        }
        // Fallback to $text if $title is empty.
        $title = $title ? $title : $text;
        // Override paragraph label only if a title has been found.
        if ($title) {
          $title = (strlen($title) > 50) ? substr($title, 0, 50) . ' (...)' : $title;
          $form[$field_name]['widget'][$key]['top']['paragraph_type_title']['info']['#markup'] = '<strong>' . $title . '</strong>';
        }
      }
    }
  }
}



Let us review in more detail what we do in this alteration.

First we check that we are on a content entity form, and the entity that we are currently editing is fieldable.

$form_object = $form_state->getFormObject();

// Paragraphs are only set on ContentEntityForm objects.
if (!$form_object instanceof ContentEntityForm) {
  return;
}

/** @var \Drupal\Core\Entity\FieldableEntityInterface $entity */
$entity = $form_object->getEntity();
// We check that the entity fetched is fieldable.
if (!$entity instanceof FieldableEntityInterface) {
  return;
}

We then check all of the entity's fields (whether it's a node, a custom block, or any other content entity) and only treat the entity_reference_revisions field type, which corresponds to the field implemented and used by the Paragraphs module.

// Check if an entity reference revision field is attached to the entity.
$field_definitions = $entity->getFieldDefinitions();
/** @var \Drupal\Core\Field\FieldDefinitionInterface $field_definition */
foreach ($field_definitions as $field_name => $field_definition) {
  if ($field_definition instanceof FieldConfigInterface && $field_definition->getType() == 'entity_reference_revisions') {
    // Fetch the paragraph entities referenced.
    $entities_referenced = $entity->{$field_name}->referencedEntities();
    /** @var \Drupal\Core\Entity\FieldableEntityInterface $entity_referenced */
    foreach ($entities_referenced as $key => $entity_referenced) {
      // Stuff.
    }
  }
}


For each Paragraph entity detected, we then retrieve the value of a field. In our example, we first test whether it is a text field (string type), then whether its machine name contains the word title, or the word text, which serves as a fallback if no field containing title in its machine name is found.

$fields = $entity_referenced->getFieldDefinitions();
$title = '';
$text = '';
foreach ($fields as $name => $field) {
  if ($field instanceof FieldConfigInterface && $field->getType() == 'string') {
    if (strpos($name, 'title') !== FALSE) {
      $title = $entity_referenced->{$name}->value;
    }
    // Fallback to text string if no title field found.
    elseif (strpos($name, 'text') !== FALSE) {
      $text = $entity_referenced->{$name}->value;
    }
  }
}

This example should of course be adapted to your own context. We could, for example, precisely target a specific field based on the type of paragraph detected:

$bundle = $entity_referenced->bundle();
$title = '';
$text = '';
switch ($bundle) {
  case 'paragraph_imagetext':
    $title = $entity_referenced->field_paragraph_imagetext_title->value;
    break;

  case 'other_paragraph_type':
    $title = $entity_referenced->another_field->value;
    break;
}

Finally, we replace the label used by the paragraph type, if we have got a value for our new label.

// Fallback to $text if $title is empty.
$title = $title ? $title : $text;
// Override paragraph label only if a title has been found.
if ($title) {
  $title = (strlen($title) > 50) ? substr($title, 0, 50) . ' (...)' : $title;
  $form[$field_name]['widget'][$key]['top']['paragraph_type_title']['info']['#markup'] = '<strong>' . $title . '</strong>';
}
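Taken out of its Drupal context, the fallback-and-truncation logic can be sketched as a small standalone function (the helper name `_mymodule_build_paragraph_label` is hypothetical, used here only for illustration):

```php
<?php

/**
 * Builds a paragraph label from a title, falling back to a text value.
 *
 * Hypothetical helper isolating the logic used in the form alter above.
 */
function _mymodule_build_paragraph_label($title, $text, $max = 50) {
  // Fallback to $text if $title is empty.
  $label = $title ? $title : $text;
  // Truncate long labels so the edit form stays compact.
  if (strlen($label) > $max) {
    $label = substr($label, 0, $max) . ' (...)';
  }
  return $label;
}

// A short title is kept as-is; an empty title falls back to the text value.
echo _mymodule_build_paragraph_label('A short title', 'Some text') . "\n";
echo _mymodule_build_paragraph_label('', 'Fallback text') . "\n";
```

Note that strlen() and substr() count bytes, so for multi-byte languages you may prefer mb_strlen() and mb_substr().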

A happy user

The result allows us to offer content publishers and editors a compact and readable edit form where they can immediately identify which content each paragraph refers to.

Improved edit form of a content item composed of paragraphs

This tiny alteration of the content edit form, and specifically of the default paragraph labels, makes it immediately more readable and understandable. It translates technical, site-builder-oriented information into content-oriented information, giving the editor a better understanding and more comfort.

I wonder how this feature could be implemented as a contributed module (or inside the Paragraphs module itself?), the biggest difficulty being that an Entity Reference Revisions field can target any number of Paragraphs types, which can themselves contain any number of fields. If you have an idea, I'm interested.

Do you need a Drupal freelancer? Feel free to contact me.

Oct 19 2016
Oct 19

by Elliot Christenson on October 19, 2016 - 12:35pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Less Critical security release for the Webform module to fix an Access Bypass vulnerability.

When using forms with private file uploads, Webform wasn't explicitly denying access to files it managed which could allow access to be granted by other modules.

You can download the patch for Webform 6.x-3.x.

If you have a Drupal 6 site using the Webform module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on

Oct 19 2016
Oct 19

Thanks to everyone who participated in this year’s New England Drupal Camp – otherwise known as “NEDCamp.” A special thanks to those of you that attended my session and to the organizers for putting together an excellent conference!

Upon posting my presentation slides online (you can find them here), I realized they're not going to be terribly helpful, since I am rarely inclined to condense all of my talking points into a slideshow. Therefore, I decided to write a two-part blog that will elaborate on crucial points I made during the presentation; this first post will focus on task prioritization and keeping all your projects and clients straight.

Throughout my presentation, we discussed different things to consider when trying to prioritize multiple projects. Three categories come into play for me when determining overall priority: 1) Client; 2) Tasks, and; 3) Other factors.

In this corner of our industry, we work with individual clients of varying sizes. It's therefore essential for us to understand each client's priority.

There are six factors I take into account when prioritizing my client accounts:

  1. Budget. This is generally the only factor people use in prioritization.
  2. Client Deadlines. In my kick-off meetings, I always talk about deadlines with my clients. Questions such as, “Do you have any contracts ending we need to know about?” “How about internal blackout dates?” “Any launches coming up?” Asking these questions can give us a lot of insight to the client expectations as well as what is important to them. This is also a great start for a candid conversation around those dates and expectations.
  3. Growth potential. If your current project with the client is a proof of concept or one of a series of projects, prioritize it as though you have those additional projects or the proof of concept was successful.
  4. Internal or external. Internal customers are often a bit more flexible with their deadlines and priorities than external customers. Learn where that priority lies, and, if the project is critical to the organization, you need to know that when prioritizing as well.
  5. Partnership potential. This is something that can often be overlooked. If a client has potential for partnership (e.g., sending you more business, being vocal on social media, participating in a case study) take that into consideration when prioritizing.
  6. Relationship. If the relationship with the client is in a bad space, I want to make sure I take that into consideration and use the project as an opportunity to turn things around.

There are four factors for determining overall priority based on tasks:

  1. Importance. We’ll talk a lot more about importance vs. urgency in the next post, but, in short, a task is important if it moves you towards the goal of your project.
  2. Urgency. A lot of people equate urgency with importance, but these are distinct things. Urgency is deadline based and is not related to the project goals.
  3. Value. When picking what task to prioritize, higher value tasks will go to the top.
  4. Effort. If a task is low effort, but still important, those will often get prioritized just to get them complete and out of the way.

Other factors I take into account when looking at my priorities are:

  • Political ramifications
  • Overall impact
  • Overall risk to the project, client, and relationship.

When taking these factors into account, I consider the whole picture when determining my priorities. Going off just one or two of these can lead to dangerous blind spots for your projects.

Another challenge to managing multiple projects is to keep everything straight when it comes to client teams, deadlines, schedules, and resources. For all aspects of your projects, nothing is going to replace hard work. Take the time to learn your projects, teams, deadlines, schedules, and resources. Putting time into this will save time in the long run as well as potential embarrassment in front of your clients.

Be specific in your notes and don’t assume you’ll know what you mean later. It’s easy to fall into the trap of “I’ll remember what I meant,” but don’t take it for granted. If you’ve switched topics and projects three times since your notes, you probably won’t remember. Aim to write your notes as if another team member will be reading them.

For your clients, don’t underestimate the power of face time and building relationships. This not only helps you keep things straight, but also essential to a successful project. While we don’t always have the luxury of traveling to client site, there are things like conferences, hangouts, and Skype to help grow these relationships.

The first thing I do when assigned a project is read the statement of work (SOW) or contract. Knowing the documented details helps me keep things straight and facilitates conversations about changes and expectations.

Schedules, deadlines, and resources often require similar tactics. For all three of these, I recommend you keep a master calendar. Know who is working on what and when. This will avoid overscheduling and give you insight into your team members' priorities.

There is a good chance your resources are working on more than just your projects. Get to know your team members’ priorities. Your number one project may be number three for them. Knowing these priorities can help you schedule accordingly.

In the upcoming second half of this series, we’ll discuss common pitfalls to managing multiple projects and how to best utilize your quieter times to make your busy times more manageable.

Like this story? Follow us on Facebook and share your thoughts!

Oct 19 2016
Oct 19
Nasdaq using drupal

I wanted to share the exciting news that Nasdaq Corporate Solutions has selected Acquia and Drupal 8 as the basis for its next generation Investor Relations Website Platform. About 3,000 of the largest companies in the world use Nasdaq's Corporate Solutions for their investor relations websites. This includes 78 of the Nasdaq 100 Index companies and 63% of the Fortune 500 companies.

What is an IR website? It's a website where public companies share their most sensitive and critical news and information with their shareholders, institutional investors, the media and analysts. This includes everything from financial results to regulatory filings, press releases, and other company news. Examples include the IR websites of three companies listed on Nasdaq.

All IR websites are subject to strict compliance standards, and security and reliability are very important. Nasdaq's use of Drupal 8 is a fantastic testament for Drupal and Open Source. It will raise awareness about Drupal across financial institutions worldwide.

In their announcement, Nasdaq explained that all the publicly listed companies on Nasdaq are eligible to upgrade their sites to the next-gen model "beginning in 2017 using a variety of redesign options, all of which leverage Acquia and the Drupal 8 open source enterprise web content management (WCM) system."

It's exciting that 3,000 of the largest companies in the world, like Starbucks, Apple, Amazon, Google and Facebook, are now eligible to start using Drupal 8 for some of their most critical websites. If you want to learn more, consider attending Acquia Engage in a few weeks, as Nasdaq's CIO, Brad Peterson, will be presenting.

Oct 19 2016
Oct 19

Drupal + Hybris

Today we're thrilled about releasing our new Hybris Drupal integration module to the Drupal and Hybris communities. The module connects a Drupal site to a Hybris instance. Drupal is a leading open-source web content management platform for content, community and commerce. Hybris is a leading enterprise commerce platform.

Using this module, a Drupal site can:

  • import products as standard Drupal entities,
  • spread product attributes between both systems and keep them synced,
  • allow users to check out and manage their account all in Drupal,
  • and use Hybris as an identity provider for Drupal.

The Hybris Drupal integration module allows for a headless Hybris ecommerce setup with Drupal on the front end, meaning that Drupal communicates with the end user and Hybris communicates only with Drupal. In essence, Drupal is the glass. These modules were built to scale and fully support global multilingual ecommerce experiences, and are currently in production.

The product was generously sponsored by Benefit Cosmetics, and built by Third & Grove and Benefit Cosmetics. TAG is one of the leading contributors to Drupal and is available for customizations for the Hybris Drupal integration.

Oct 19 2016
Oct 19

Webinar: International SEO + Drupal with Lingotek

If you're a digital marketer, SEO is likely one of your many priorities. Since Google released "Mobilegeddon" last April, your primary focus may have shifted to mobile optimization - but have you noticed a struggle to successfully reach and engage with prospects from across the globe? If you target multiple countries and languages, your site needs to be optimized not just for Mobile SEO, but for International SEO as well. 

Join us for a live webinar to learn more about how Mediacurrent and Lingotek can help you speak the right language to your global audience!

Date: Thursday, October 27
Time: 10:00 am PT / 1:00 pm ET

Key Takeaways of this Webinar Will Include:

  • Why International SEO is important in today's market
  • Tips for optimizing a global SEO strategy
  • Tools that can easily help you translate your site
  • And more!

Presented By:

  • Jen Slemp, Senior Digital Strategist, Mediacurrent
  • Bryan Duncan, Product Marketing, Lingotek


Registration for the webinar is now open - sign up here. We’re looking forward to an informative and engaging discussion. We hope you're able to join us!

Additional Resources
Internationalization and Drupal | Blog
Translating Success in Drupal 8: The Manhattan Associates Story | Video
Manhattan Associates Drupal 8 Redesign | Case Study
Multilingual Translation in Drupal | eBook

Oct 19 2016
Oct 19

I wrote a custom module recently where I needed to programmatically attach a field; similar to how a Body field is added to content types.

If you create a content type, via the “Content types” page, a Body field is automatically added to the content type. If you don’t need the field just delete it, but the default functionality is to have it added.

I needed this same functionality in my custom module; when an entity is created a field is programmatically attached to it.

So I reverse engineered how the Body field gets added to content types. In the Node module, the Body field storage is exported as configuration, and the field is attached using the node_add_body_field() function.
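As a sketch of that pattern (loosely adapted from node_add_body_field(); the entity type 'myentity', field name 'field_custom', and helper name are all placeholders, and this only runs inside a Drupal 8 site):

```php
use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;

/**
 * Attaches a custom field to a bundle, like node_add_body_field() does.
 *
 * Hypothetical helper: 'myentity' and 'field_custom' are placeholder names.
 */
function mymodule_add_custom_field($bundle) {
  // The field storage is shipped as exported config with the module.
  $field_storage = FieldStorageConfig::loadByName('myentity', 'field_custom');
  $field = FieldConfig::loadByName('myentity', $bundle, 'field_custom');
  // Only attach the field if it is not already on this bundle.
  if (empty($field)) {
    $field = FieldConfig::create([
      'field_storage' => $field_storage,
      'bundle' => $bundle,
      'label' => 'Custom field',
    ]);
    $field->save();
  }
  return $field;
}
```

The helper would be called whenever a new bundle of the entity type is created, mirroring how the Node module attaches Body to each new content type.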

I implemented my custom module in a similar fashion and everything worked until all the entity bundles with the custom field were deleted. When a new entity type was created you’d get a fatal error saying the custom field, which was programmatically attached, doesn’t exist.

So what happened?

Change Configuration Yaml

Drupal by default will delete a field if it’s no longer used. Now the fix for this is pretty simple.

Open the field storage yaml for the specific field and look for "persist_with_no_fields".

Persist with no field

Simply set persist_with_no_fields to TRUE in the yaml file. For example, persist_with_no_fields: true.

Let’s take the Body field as an example. If you open up the field storage yaml file,, you’ll see that the persist_with_no_fields option is set to TRUE. By default Drupal will set it to FALSE.

Don’t Forget to Import the Configuration Change

Once you’ve modified the yaml file don’t forget to import the configuration change. If you want to learn more about configuration manage in Drupal 8, check out page “Managing your site’s configuration“.


I do understand this is a fairly niche problem. But if you want fields to persist then set this option to TRUE and you’re good to go.

Oct 19 2016
Oct 19

Two weeks ago I wrote about routes and controllers in the introduction to namespaces. This week we are going to take a much closer look at routes and controllers. 

If you missed out on the namespace introduction, check it out here: A gentle introduction to namespaces.

So, what exactly is a route and a controller?

When you create a custom page in Drupal with code, you need both a route and a controller. You define the URL for the page with the route. And then you create a controller for that page. This will be responsible for building and returning the content for the page.


A route determines which code should be run to generate the response when a URI is requested. It does this by mapping a URI to a controller class and method. This defines how Drupal deals with a specific URI.

Routes are stored in a YAML file (check out last week's tutorials on YAML files) in the root of the module and use the following naming convention:

modulename.routing.yml

Here is the example from the namespace tutorial:

hello.content:
  path: '/hello'
  defaults:
    _controller: 'Drupal\hello\Controller\HelloController::content'
    _title: 'Hello world'
  requirements:
    _permission: 'access content'

The name of the route is hello.content. The path is /hello, which is the registered URL path. So when a user visits /hello, this route will decide which code should be run to generate a response.

We then have two default configurations specified, the controller and the title.

The controller tells Drupal which method to call when someone goes to the URL for the page (which is defined in the route).

Let’s take a closer look at the path included in _controller.

'Drupal\hello\Controller\HelloController::content'

This comprises a namespaced class, a double colon, and then the method to call.

Drupal route with a namespace, class and method
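As a simplified illustration of how that string breaks apart (this is not Drupal's actual resolver, which also instantiates the class through the class loader, but it shows the anatomy of the value):

```php
<?php

// A _controller value from a routing file.
$controller = 'Drupal\hello\Controller\HelloController::content';

// Split the namespaced class from the method name on the double colon.
list($class, $method) = explode('::', $controller);

echo $class . "\n";   // The fully namespaced controller class.
echo $method . "\n";  // The method Drupal will call to build the page.
```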

The title is the default page title. So when a user goes to /hello, the page title will be ‘Hello world’.

Route title maps to the page title


Controllers take requests or information from the user and decide how to handle them. For this module, the controller is responsible for generating the content (the 'Hello world' message) and returning it for the page.

A controller that returns a simple Hello world message looks like this:

<?php

/**
 * @file
 * Contains \Drupal\hello\Controller\HelloController.
 */

namespace Drupal\hello\Controller;

use Drupal\Core\Controller\ControllerBase;

class HelloController extends ControllerBase {

  public function content() {
    return array(
      '#type' => 'markup',
      '#markup' => t('Hello world'),
    );
  }

}

The controller is returning a renderable array with “Hello world”. So if you hit /hello once all this is in place, you will get a Drupal page with a Hello world message.

Drupal 8 controller content method maps to content on the page

Drupal 8 module route and controller

The controller lives in a Controller directory within the src directory of a module.

The controller class in the src controller directory

Putting all of this together

To see this in action, you’re going to combine this all in a new module.

  1. In the /modules directory, create a directory called hello
  2. Create the module's .info.yml file in the root of the hello directory
  3. Add the following to it:

name: Hello
description: An experimental module to build our first Drupal 8 module
package: Custom
type: module
version: 1.0
core: 8.x

  4. Create a directory inside the hello module directory called src, which is short for source
  5. Create a directory within src called Controller
  6. Within the Controller directory, create a file called HelloController.php
  7. Add the following to HelloController.php:

<?php

/**
 * @file
 * Contains \Drupal\hello\Controller\HelloController.
 */

namespace Drupal\hello\Controller;

use Drupal\Core\Controller\ControllerBase;

class HelloController extends ControllerBase {

  public function content() {
    return array(
      '#type' => 'markup',
      '#markup' => t('Hello world'),
    );
  }

}

  8. Create a file called hello.routing.yml in the root of the hello module directory
  9. Add the following code to hello.routing.yml:

hello.content:
  path: '/hello'
  defaults:
    _controller: 'Drupal\hello\Controller\HelloController::content'
    _title: 'Hello world'
  requirements:
    _permission: 'access content'

  10. Enable the hello module
  11. Visit /hello to see the Hello world message

It is good practice to complete these steps manually when you are first learning the concepts in Drupal 8 development, but you can save time by using the Drupal Console to create most of the code in the above steps. To find out how, check out my 7 day email course on the Drupal Console.

Oct 19 2016
Oct 19

Last week I had the opportunity to go back to Colombia as part of my tour Around the Drupal world in 140+ days.

To be honest, this stop wasn't planned. During my visit to France, the border control officer informed me about a situation with my passport: I had almost run out of space for new stamps. That forced me to try to get a new passport as soon as possible. After checking with the Colombian embassy in Costa Rica, I confirmed that renewing my passport in Costa Rica wasn't an option due to the turnaround time. For that reason, I had to travel to Colombia to renew my passport there.

I tried to fill this unexpected trip with as many activities as I could. On the Drupal side, I participated in a Drupal meetup organized by Seed, and in particular by Aldibier Morales. They rented a space and organized the event to enable me to talk about Drupal Console and the Drupal community in general. I enjoyed the Q&A session, where I could share some points of view about how to handle local communities.

With my new passport in my hands, I started a marathon to visit my mother's and father's families in Bucaramanga, Santander. It was really good, because I hadn't visited them for many years.

So, as many times during my #enzotour16, I overcame the adversities and turned them into something positive as much as I could.


  • Distance (kilometers): San Jose, Costa Rica → Bogota, Colombia → San Jose, Costa Rica: 2.576; previously: 106.561; total: 109.137
  • Distance (steps): Dublin: 39.597; previously: 1.897.088; total: 1.936.685
  • Distance (kilometers): today: 0; previously: 528; total: 528
  • Distance (kilometers): today: 796; previously: 2.944; total: 3.740
Oct 18 2016
Oct 18

We are thrilled when clients seek us out to create beautifully designed and highly functional websites. However, we often find ourselves spending a lot of time to convince clients that creating compelling content is a critical component of finding and growing a highly engaged audience. Once they are on board, they begin to realize that creating great content isn’t always easy and can take a lot of time and effort.

It is disheartening to see what can happen when a link is shared on Facebook without proper meta tags, which often results in the wrong image or description being displayed. Not only is this sloppy, but it relies on users to manually correct that information (if the particular social media platform even allows for that flexibility). Worse, your audience is unlikely to take the time to make those same adjustments when they are sharing your content. The solution to this is to leverage HTML meta tags to tell and tailor your story to each social media platform.

A Real Life Example

Imagine you show up at a dinner party. You look around and discover that you don’t know anyone there! You keep scanning the unfamiliar faces when someone bumps into you and strikes up a conversation. Inevitably, the other person asks you, “so, what do you do for a living?”

Such a simple question, but you probably have anywhere from 2-5 canned responses to pick from depending on the background of the person that asks. If you had an inkling that the person worked in a similar industry or knows of the company you work for, you might provide a somewhat detailed answer going into specific nuances that they might understand and appreciate. On the opposite end of the spectrum, you might overwhelm someone that has no frame of reference. Here you are more likely to give a generic answer, which is why I simply tell my mom “I work with computers” and that’s sufficient to keep the conversation going.

In short, different people get different answers based on what will best support the conversation with that person or audience. We now need to apply those same principles for your content as it is interacted with on different social media platforms.

Part 1: Creating Your Sharing Strategy

Before we discuss the mechanics, it’s important to review your goals for each social media platform. Similar to the dinner party analogy, there are a variety of factors that you need to consider when deciding on what to communicate.

While the following is an overly simplistic breakdown of LinkedIn, Facebook, and Twitter, it will reinforce the need to tailor the messaging to each platform. Typically, LinkedIn is geared towards conversations relating to careers, networking, business opportunities, etc. Facebook started primarily as a way to connect with friends but has rapidly evolved into a mix that depends greatly on how you’ve been using it and what types of connections you already have. Twitter is based on real-time communication and information sharing.

It’s possible to create a piece of content that is useful to share across all 3 platforms at the same time, but it’s unlikely that it’s appropriate to present the content in the same way due to the different audiences and expectations on each platform. In short, posting an adorable picture of my two and a half-year-old daughter is encouraged on Facebook, arguably not something I would do on Twitter, and something I would never, ever do on LinkedIn.

Part 2: The Mechanics

If you’ve never built a website before, it’s unlikely that you’ve peeked under the hood to see the guts of an HTML page. If you’d like to get a very quick overview, here is a basic tutorial to get started. And if you really want to take a deep dive, here’s a comprehensive overview on Wikipedia.

Traditionally, when you share a link on social media, each platform does its best to scrape the page’s HTML and extract the pertinent details. In general, getting the title of the page is a no-brainer, but the description can be messed up in a variety of ways. Sometimes it will pull the wrong description or give up and show nothing at all. Twitter’s character limit can also cause a truncation that can either change the meaning or not accurately describe the page.

Picking an appropriate image can often feel like a game of Russian roulette. While you may expect the platform to pull an image from within the article, it will sometimes pull an image from a completely unrelated part of the page. Unfortunately, this can not only harm the quality of the share result, but the juxtaposition of the two can sometimes be unappealing or inappropriate. Just imagine an article on depression pulling in a specific person’s photo; you may be setting the reader up with the wrong expectation.

This is where meta tags come in. Here we provide additional information to tell the different platforms exactly which information and media we want them to use instead of having them guess. For example, Facebook uses a protocol called Open Graph, a specific subset of meta tags that live in the head region of the HTML page and are used for share results on their platform. Twitter has its own set of meta tags that not only let you specify different information, but also allow you to specify which style of share you would like to use (photo, video, standard, etc). And finally, LinkedIn leverages the Facebook Open Graph meta tags, so you can’t quite delineate the message between the two. Although, it is probably only a matter of time before LinkedIn creates its own version so that you can tailor it to be a different introduction than Facebook’s.

Here’s a sample of the 9 lines of code we’ll be using to target our message.

<meta name="title" content="Page Title for Search Engine Results | Website Name" />
<meta name="description" content="Page Description for Search Engine Results" />
<meta property="og:title" content="Page Title for Facebook" />
<meta property="og:description" content="Page Description for Facebook" />
<meta property="og:image" content="" />
<meta name="twitter:card" content="summary" />
<meta name="twitter:description" content="Page Description for Twitter." />
<meta name="twitter:title" content="Page Title for Twitter" />
<meta name="twitter:image" content="" />
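To make the scraping behavior described above concrete, here is a minimal sketch (using only Python's standard library) of how a share scraper might pull the og: and twitter: tags out of a page's head; real platforms do much more, and the class and sample markup here are illustrative only.

```python
from html.parser import HTMLParser

class ShareTagScraper(HTMLParser):
    """Collect the og:/twitter: meta tags a share scraper would read."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # Open Graph tags use property="og:..."; Twitter card tags use name="twitter:..."
        key = attrs.get("property") or attrs.get("name")
        if key and key.startswith(("og:", "twitter:")):
            self.tags[key] = attrs.get("content", "")

page = """
<head>
  <meta name="description" content="Page Description for Search Engine Results" />
  <meta property="og:title" content="Page Title for Facebook" />
  <meta name="twitter:card" content="summary" />
</head>
"""

scraper = ShareTagScraper()
scraper.feed(page)
print(scraper.tags)
# {'og:title': 'Page Title for Facebook', 'twitter:card': 'summary'}
```

When the tags are present, there is nothing left to guess; when they are missing, the scraper falls back to heuristics, which is where the wrong image or truncated description comes from.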

If you’re using a CMS like Drupal 8, adding these values to any node page is very straightforward. Simply install the Metatag module along with the Open Graph and Twitter submodules. Then visit the configuration menu located at /admin/config/search/metatag. There you can specify the default global values and tokens for each of these meta tag options. You can then override these values on a per-node basis by visiting the node edit screen and navigating to the meta tags section.

It’s worth noting that there are more meta tags available that can be used to specify content attributes ranging from a video URL to its geographical location (i.e. longitude and latitude). Here’s just one list to emphasize the quantity and variety of tags available.

Part 3: The Outcome

Let’s use a real example from the newmedia blog. Several months ago, I published an article titled PCI Compliance & Drupal Commerce: Which Payment Gateway Should I Choose? Given the technical nature of the article for both developers and businesses, LinkedIn and Twitter would be the most appropriate place to share this page.

Without any metadata specified, the share result on Twitter is fairly bland. Here’s a quick screenshot showing it.

A tweet from a URL that contains no meta tags.

Figure 1: A tweet containing a link that lacks HTML Meta Tags for Twitter.

Alternatively, I can provide a very tailored title, image, and description that is far more likely to get someone’s attention and provide them with an accurate representation of the article itself.

A tweet from a URL that contains meta tags.

Figure 2: A tweet containing a link with HTML Meta Tags for Twitter.

It’s important to note that these changes are more than just simple cosmetics. The share result is now taller because the image gets more real estate and you’re able to leverage the full length of the description field. Both of these changes make the tweet stand out more. If I had a video, I could put that in as well, allowing me an additional opportunity to communicate with a potential reader before they leave Twitter. In short, I’ve dramatically improved my odds of reaching the audience.

Finally, we need to change the message on LinkedIn. By editing the "og:description" field, we could say something along the lines of "Keep your customers' credit card data safe while meeting your PCI compliance obligations for your Drupal Commerce site by using these recommended payment gateways." This way we are focusing more on a business's need to protect its customers by choosing the appropriate gateways instead of appealing strictly to a security-minded developer on Twitter.
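In markup terms, that tailored LinkedIn message is just a different value in the same og:description line from the sample above (the copy here is the hypothetical wording suggested in this section):

```
<meta property="og:description" content="Keep your customers' credit card data safe while meeting your PCI compliance obligations for your Drupal Commerce site by using these recommended payment gateways." />
```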


As social media platforms continue to become a dominant traffic source for your content, it becomes even more important to put in the small amount of additional effort to ensure your story can connect with each audience. Through the proper use of metadata, you can easily achieve this goal with your existing content as well as make this a part of your editorial process moving forward.

PS. You can use the following preview tools to test what content will look like on Twitter and Facebook.

Oct 18 2016
Oct 18

In the fall of 2016, the Rainforest Alliance and Last Call Media launched an exciting redesign of the Rainforest Alliance website, built on Drupal 8 and employing impeccable agile software development methodologies. Our productive partnership with the Rainforest Alliance resulted in a technically groundbreaking site that allowed users unprecedented access to the riches of their content after just four months of development. The tool is now primed to drive the Rainforest Alliance’s critical end-of-year development activities.

Our relationship with the Rainforest Alliance began in August of 2013 when LCM undertook a massive Drupal 6 to Drupal 7 upgrade.  We enjoy a strong relationship with the Rainforest Alliance team, working together to continuously deliver strategic value in their digital properties, and were proud to be chosen for a full site redesign and upgrade.

The lobby at RA Headquarters in NYC.

Over the years, RA has cultivated a repository of structured content to support their mission.  While the content is primarily displayed as long form text, there is a wide variety of metadata and assets associated with each piece of content.  One of the primary goals of the new site was to enable discovery of new content on the site through automatic selection of related content driven by the metadata of the content the user was viewing.  Additionally, RA had a future requirement for advanced permissioning and publishing workflows to enable stakeholders outside of the web team to play a role in the content lifecycle.  

After some initial consideration, the Rainforest Alliance and Last Call Media decided to build a responsive Drupal 8 website, which included both building out new content types and migrating existing content from their then-current Drupal 7 site. It needed to launch on a 4 month timeline, by the end of September 2016.

Drupal 8 was selected for this project based on several factors.  First, its focus on structured data fit well with Rainforest Alliance’s need for portable and searchable content.  Second, the deep integrations with Apache Solr allowed for a nuanced content relation engine.  Solr was also used to power the various search interfaces.  Third, Drupal has historically had powerful workflow tools for managing content.  While these tools weren’t quite ready for Drupal 8 when we built it, we knew they would be simple to integrate when they were ready.  In short, Drupal was a perfect fit for the immediate needs, and Drupal 8 met the organization’s longer term goals.

To meet the 4 month timeline, the project followed a highly collaborative, agile project management style (Scrum) such that RA could provide wireframes, design and UX direction, technical specifications, user testing and QA for all content types, and LCM would carry out all Drupal Development, including theming, and project management, providing guidance based on our expertise and best practices. 

Furthermore, not all requirements were known at the outset, and many things were known to potentially need to go in a different direction depending on some of the outcomes along the way. Agile methodologies avoid the “big reveal” better than other styles of project management. Issues requiring a change in direction are raised in near real time, saving time in the long run by better avoiding wrong directions.

The above photo is from our Project Sizing and Sprint Forecasting 2 day workshop, on-site at RA Headquarters in NYC.

Leading up to Sprint 1, for a period of 4 weeks, I worked as a Product Owner (PO) and Agile Coach, with the Rainforest Alliance (RA) webteam in a Business Owner (BO) role, and Rob Bayliss, as a Subject Matter Expert (SME) from Last Call Media (LCM), to build and groom the initial backlog. During this time, we coached RA on Agile/Scrum and being effective in the Business Owner role. 

Together, meeting twice per week, we groomed and refined the project backlog to the point where SME-guesstimated sizing seemed reasonable for forecasting purposes. We did this sizing together too, with each epic guesstimated for size by Rob on note cards. We went through each of these note cards together with RA in planning poker style, where everyone guessed at how big they thought the epic was and then, revealing Rob’s guesstimate, we compared the group’s sizing differences. It didn’t take everyone too long to pick up on Rob’s style of sizing, but many misconceptions and misunderstandings were revealed when someone’s sizing guess was wildly off from someone else’s. This exercise fine-tuned our alignment on the backlog items and allowed for more accurate forecast sizing.

Based on the sizing exercise, we set to strategically sorting our epic note cards into sprint piles. The strategy was to group cards into earlier sprint piles based not only on the importance of the epic but also on how helpful it would be to have that epic in place for later sprints. A grouping was considered full when it reached resource capacity, which was determined by adding up the sizing from each note card to an agreed-upon threshold. It turned out we were able to forecast 7 sprints’ worth of backlog items, with approximately 2 more sprints’ worth of “nice-to-haves” as well as some epic cards determined to be no longer needed.

The result was each sprint’s goal being forecasted to better enable the following sprints’ goals and concurrent releases for user testing, feedback, and iteration. Additionally, as the project budget and timeline only allowed for 6 sprints, our forecast safely assumed a backlog of certain items being left undone at the final public release after the 6th sprint. This concept of moving items to a backlog for after sprint 6 would become a critical one later in the project as complexities were uncovered, new directions emerged, and priorities changed.

6 Sprint Forecast

The above spreadsheet shows a forecast of 6 sprints’ worth of guesstimated epics. The concept of a forecast was an important one. Just like the weather forecast, the further out it goes, the less accurate it can be expected to be. The sprint forecast further became a living document for maintaining an evolving project vision across all of its iterations.

With each sprint, we made three planning ceremonies available to the project. Pre-sprint grooming was a group exercise for the development team to go over the upcoming sprint’s wish list for the purposes of optimizing the official Sprint Planning meeting. Sprint Planning was held on the first day of a sprint and followed traditional Scrum guidelines. In Re-Forecasting, the team gave feedback from the trenches on the original forecast of SME guesstimates. This enabled the opportunity to adjust the forecast to be more realistic, evolving the project vision as it was adapted to these reports. Additionally, in later sprints, we began doing mid-sprint releases of completed work to be previewed. We did this to enable better planning and re-prioritization from the RA team.

The image above shows a typical daily standup meeting from the project.

For openness, I kept a Product Owner journal shared with the RA team. In it I tracked the daily standups, their sticking points and resolutions, as well as each day’s User Stories that were completed and the work-unit points that I awarded. This last piece of daily info was used to keep a realtime project build up chart.

Agile doesn’t tell us not to have a plan, but to always be planning.

A plan requires an awareness of things to consider for planning. Agility is the concept that we need to be ready to adapt our plans over time as we gain additional awareness. To that end, all of Scrum, from its Values to its Ceremonies, is designed to increase awareness, enabling better adaptation to change. The following build up chart is a Scrum artifact from this project that was used for the purpose of increasing awareness and better planning.

RA 6 Sprint Project Build Up Chart

The Build Up Chart additionally serves well for telling this project’s story across its six sprints. For example, one can tell things about the project just from this chart. The sharp upticks toward the end of each sprint are indicative of a new build, where substantially complex functionality was attempted each sprint, resulting in shippable increments on most stories not being realized until the very end of their sprint. One can also tell that this project was re-forecasted at least three times, resulting in the adjustments to the estimated project size over time.

Sprints 1 and 2 stayed on track nicely, getting most of the critical core functionality in place. Some key Drupal 8 modules implemented during these sprints included Page Manager, Layout Plugin, Panels, as well as Search API/Search API Solr, Media Entity (and related modules), Entity Embed, Entity Browser, Inline Entity form, and Features. Core functionality is described below in the context of each of these Drupal modules.

Page Manager/Layout Plugin/Panels:
Page manager is a great tool for making it easy to create specialized landing pages, and when combined with Layout Plugin and Panels it provides the ability to use different layouts when viewing different node types (or other entity/bundle combination). Specialized landing pages were built as specialized page manager pages, many with their own layouts. All of the different full node displays were handled by Page Manager, using different variants for each node type.

Search API/Search API Solr:
Most content types have a “related content” section at the bottom of the page. Tagging content is one great way to handle something like this, but for our requirements we needed to have logic that was more robust than only showing other content with the same taxonomy terms. We went with Solr for this, specifically for the “more like this” (MLT) functionality that it provides. Search API Solr provided the interface for managing our servers and indexes, then with a custom block we were able to leverage MLT with our own boost criteria to help control how the related content lists were generated.
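The project's exact boost criteria lived in a custom block, but as a rough sketch of the idea, a "more like this" request to Solr can be assembled like this (assuming the MoreLikeThis handler is enabled at /mlt; the core name and field names below are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical core and field names; adjust to match your own Solr index.
params = {
    "q": "id:node-123",            # the piece of content being viewed
    "mlt.fl": "field_tags,title",  # fields Solr compares for similarity
    "mlt.mintf": 1,                # minimum term frequency to consider a term
    "mlt.mindf": 1,                # minimum document frequency to consider a term
    "mlt.boost": "true",           # boost query terms by their "interestingness"
    "rows": 5,                     # number of related items to fetch
}
url = "http://localhost:8983/solr/drupal/mlt?" + urlencode(params)
print(url)
```

The mlt.fl fields and boost settings are where the "more robust than shared taxonomy terms" logic lives: Solr weighs the interesting terms from the current document across those fields rather than requiring an exact tag match.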

Media Entity (and related modules):
Drupal core provides file fields, which allow us to upload files to different entities, but this project had a requirement that we must be able to reuse the uploaded files, and have the ability to add additional related information for each file or image that is uploaded. Things like caption, image source, etc. On top of that, we needed to be able to display these files in different ways - in some places an image may display the caption as a tool tip, while in others it should display below the image. The Media suite of modules is perfect for this type of thing. We were able to use different modules from within the media ecosystem to handle images, embedded videos, and PDF documents, and add appropriate fields to each media entity bundle, and using Drupal core’s view mode system we were able to set up multiple displays for each media type.

Entity Embed / Entity Browser/Inline Entity form:
It hasn’t always been easy to empower content teams to easily add images and other entities to WYSIWYG fields, especially when those items need to be themed in a special way. Entity Embed allowed us to add new CKEditor buttons to the WYSIWYG that provide a dialog where the user can choose an entity that they want to appear in content, the view mode that they want it to display with, and then position it on either the left, center, or right side. One great thing about this is that the module uses a text format filter, so different text formats can display the embedded entities, while the others don’t. Entity Embed is primarily used for embedding images, but we also used it to give content editors the ability to embed blocks in their WYSIWYG content as well.

Inline Entity Form allowed us to create entity reference fields, but gave content editors the ability to create the referenced entities right from the node edit form, something that can be a big time saver for content editors.

Entity Browser ties in with both Entity Embed and IEF by adding a button that opens a dialog displaying a view that allows users to select the entity that they want to use from a list, rather than having to remember media names, taxonomy names, or node titles and enter them into an autocomplete field.

These modules combined help make for a great editorial experience.

Drupal 8’s CMI initiative solved a lot of issues around managing configuration. That being said, we’ve found that bulk exporting/importing an entire site’s configuration isn’t a great workflow for our team, which involves multiple environments and developers, each potentially having a few of their own special configuration options that need to be set. Manually seeking out and overriding those configuration options in settings.php isn’t something that we decided was sustainable, and has its own drawbacks. The Features module in D8 allows us to package and ship the configuration that we need to be consistent, while allowing us to leave out what may be different across environments (such as development-only modules, css/js aggregation, and page caching).

Even though things were on track nicely across the first two sprints, we could see work in progress and technical debt catching up with us during sprint 2. The first two sprints laid a tremendous amount of groundwork, but also left as much technical debt. This resulted in a pivotal moment in the build chart, shown up close below, with progress dipping below target.

Sprint 3 shortfall

Notes from Sprint 3’s Planning Meeting:

“This was forecasted as a really full sprint, but the team decided to only accept the stories above to make sure that there would be time to completely wrap up. The team is trying to be mindful of the amount of WIP tasks that already exist, and avoid creating more of those loose ends.”

The Homepage was completed in Sprint 3. The above image is of the final homepage.

Fewer stories were committed to, meaning fewer work-unit points earned and a below-target build up, but the silver lining was that the results were beyond excellent, with far less technical debt left over to be carried into future sprints.

Sprint 4 Build up

For Sprint 4, we reconfigured sprint priorities with RA and reassessed story sizing. This reconfigured sprint forecasting resulted in a batch of stories different than originally forecasted. It also resulted in the estimated project size increasing. By the end of this sprint, we worked with RA to move tasks out of the project. This resulted in the estimated project size decreasing, putting the project on target with the build team’s velocity.

The team’s continued focus on careful commitment paid off big in this delivery in terms of comprehensive task completion, as well as delivering on a stretch goal, which nudged them just above target, leaving them in a great place for the next sprint. In addition to further iterating on the groundwork from the previous sprints, Sprint 4, by way of Drupal’s Panelizer, brought with it Landing Pages and Content Hubs.

Landing Pages and Content Hubs

When viewing taxonomy terms for several vocabularies we needed the ability to have a consistent layout, but to place different Custom Block entities on each term, which is essentially what Panelizer is made for. The module doesn’t have full taxonomy term support yet, but the community is working to get it added, and the patches provided in this issue were far enough along that we were able to make it work without issue.

Editor experience

We were technically crushing it, but Sprint 3 was the beginning of a turning point that comes in every project worth discussing. We didn’t know it at the time but Sprint 3 was the transition point from Certainty to Doubt in this project’s Emotional Cycle.

emotional lifecycle

The idea is that first there is a honeymoon period of Optimism and Certainty, but inevitably things don’t always work out as expected and Doubt creeps in, pulling the team into a Pessimistic state. Good teams identify and adapt to this shift, moving from Doubt to Hope quickly with minimal emotional damage. With every project, I’ve tended to look to preempt the slip from Certainty to Doubt, somehow hoping to skip straight to Hope or even to Satisfaction. Reality, however, can often be too elusive until it smacks you right in the face. I’ve come to accept that traveling the path through Doubt, into the pessimism, is simply the sign that a project is attempting to do at least as much as it should. Doubt comes from the realization that not everything imaginable is realistic to expect.

How do you know if you are doing as much as you could if you never run out of time to do more? 

Pessimism is a part of the process of grieving the loss of things hoped for that now seem unrealistic. The uptick toward Hope comes from acceptance and adapting expectations to reality. 

The following overlay of the two graphs, Build Up and Emotional Cycle, shows their relation visually.

overlaying of the two graphs, Build Up and Emotional Cycle

Reality will always win, but are you really on its team?

The ascension to Hope, on this project, can be attributed to an understanding of and a dedication to the five Scrum Values (Openness, Focus, Commitment, Respect, and Courage), followed by a more rapid iteration strategy, with frequent mini-releases during Sprint 6. Since most core functionality was solidly in place by this time, it became possible to squash larger numbers of site-wide bugs by relating them to the current sprint’s stories. This resulted in those stories shipping as more highly polished than was possible in previous sprints, while at the same time further iterating on stories from past efforts. Also, to increase awareness, and thereby project agility, during the final sprint, all Accepted Backlog Items were released and reviewed, as they were completed. The result was a highly collaborative finishing of the final shippable increment. At the close of Sprint 6, there were zero critical and only 3 moderate issues. The final Sprint/Project review had only 3 support questions.

RA iPad

RA mobile

The project used its remaining time until launch day running extensive QA, with the LCM Continuous Delivery team making adjustments, finally launching as arguably the most impressive Drupal 8 site launched within a year of the initial release of the latest major version of the open source CMS, and, most importantly, in time for the Rainforest Alliance's major end-of-year donation campaign. The site delivers on its promise to showcase the Rainforest Alliance’s exciting and informative messages and beautiful imagery, and stands as a testament to the efficacy of the agile approach.

Last Call Media is a full-service creative agency developing solutions for partners online and off through innovative strategy, branding, print, and digital design. Last Call Media enjoys work with purpose: building engaging solutions that assist and support organizations working to improve their communities.

Contact us to find out about Drupal 8, Agile Coaching, and working with us!

Oct 18 2016
Oct 18

At DrupalCon Dublin, I spoke about The Association’s commitment to help Drupal thrive by improving the contribution and adoption journeys through our two main community assets, DrupalCon and Drupal.org. You can see the video here.

One area I touch on was my experience as a new code contributor. Contributing my patch was a challenging, but joyous experience and I want more people to have that feeling—and I want to make it as easy as possible for others to contribute, too. It’s critical for the health of the project.

At the heart of the Drupal contributor community are our custom development tools, including the issue tracker, Git repositories, packaging, updates server, and automated testing. We believe there are many aspects of Drupal’s development workflow that have been essential to our project's success, and our current tooling reflects and reinforces our community values of self-empowerment, collaboration, and respect, which we seek to continue to uphold.

It’s time to modernize these developer tools. To support this objective, the Drupal Association created a Technical Advisory Committee (TAC). The TAC consists of community members Angie Byron, Moshe Weitzman, and Steve Francia, who is also our newest Drupal Association board member. The TAC acts in an advisory role and reports to me.

Building off of the work the community has already done, the TAC is exploring opportunities to improve the tools we use to collaborate on Drupal.org. The crux of this exploration is determining whether we should continue to rely on and invest in our self-built tools, or whether we should partner with an organization that specializes in open source tooling.

Our hope is that we will be able to bring significant improvements to our contribution experience faster by partnering with an organization willing to learn from our community and adapt their tools to those things we do uniquely well. Such a partnership would benefit both the Drupal community—with the support of their ongoing development—and potentially the broader open source community—by allowing our partner to bring other projects those aspects of our code collaboration workflow.

The TAC will use a collaborative process, working with staff and community to make a final recommendation. The TAC has already begun the process and has had some very positive exploratory conversations. The TAC and staff will be communicating their progress with the community in upcoming blog posts.

Oct 18 2016
Oct 18

In 2006 I was fresh into Drupal; a few months earlier I had built my first Drupal 4.7 site. The web design shop I co-owned, the legendary "Noget med ILD” (Something with FIRE), did have our own homegrown CMS (molotov), but I was looking for a system to replace it. I wanted to go back to doing more design and less PHP coding, and I really didn’t like writing SQL or working for hours to optimise some query. After 10 years in the industry, my love was still HTML + CSS; that was my design language.

I was really eager to learn more about Drupal, but even more I wanted to know what the Drupal people were like. I had touched a few other open source projects, but one thing that turned me away from a lot of them was the “Nobody but us understands how to build websites” attitude. An important thing for me attending DrupalCon Brussels was to figure out what the community was like - if they were cool, I was pretty sure I would begin to use this Drupal thing as my main tool for building websites in the future.

Drupal & macbooks

What I first found at that 180-person DrupalCon was an extremely welcoming bunch of nerds & geeks. A thing that struck me was that almost everybody was on a Mac. Remember, I was coming from the design world, and back then it wasn’t normal to have a Mac. So many designers here; great, now maybe I can find a few like-minded people and we can look into fixing the markup! 20 minutes later I understood the reason for the Macs: they had the terminal, plus a Mac could play movies without you having to compile your own driver. I'm still sure I was the only one at the event who used Photoshop for other things than just cropping images.

After 2 days of DrupalCon, there were a few things that made a profound impression on me, and I still think they are the reason I stuck around for the next 10 years:

Oct 18 2016
Oct 18
Drupal 8 Private Online Development

An OSTraining member asked how you could work online and stop bots from crawling a site in development.

We are going to use 3 modules to secure our Drupal 8 environment so we can work online in private.

Here's how to get started:

  • Download and enable RobotsTxt 
  • Select configure from the Extend menu, or configure RobotsTxt from the Search and metadata section.


The configuration gives us direct access to the robots.txt file, and we can edit it here. If you want to disallow all crawling (i.e. deny access to Googlebot and friends), you can either comment the existing rules out with a hash or remove them entirely and replace them with a blanket disallow. I have removed them for this example.
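For instance, a development site that should turn away all well-behaved crawlers can reduce the whole file to a blanket disallow (a sketch; Drupal's default robots.txt ships with many more rules):

```
User-agent: *
Disallow: /
```

Note that this only deters polite bots that respect robots.txt, which is exactly why the next two modules add a real access barrier.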


Now select the link at the top of the configuration page to take you to the robots.txt file and verify that the changes have been saved. If the file does not match what you have saved, then Drupal does not have sufficient permissions to edit the original file. If this happens, you will have to manually delete or rename the original file so that the new one can be saved in its place. Now that we have configured and confirmed that the robots.txt file is working correctly, we can even verify it with Google (this requires a Google account).

Now that we have configured robotstxt we will setup Require Login.

  • Download and enable Require Login, then select configure from the Extend menu.


Here we can set a message, exclude some paths, and even change the anonymous path. For the purpose of securing the site we don't need to set any of these, so I am just going to save the configuration, open another window, and test that it does indeed force users to the sign-in page.

verify require 4

You should note that any links that lead to resources not contained within your site will still be clickable. If you really want to prevent anything from being visible, you will have to check the block display option so that the login form is all that loads.

Now that we have configured Require Login we will setup Shield.

  • Download and enable Shield, then select configure from the Extend menu. This module requires Apache to be installed on the server.


From here you can even block access via the command line: should you wish to disable command line access, untick this box.

Select a user name and password for access. 

Save the configuration.



About the author

Daniel is a web designer from the UK, who's a friendly and helpful part of the support team here at OSTraining.


Oct 18 2016
Oct 18

Maybe you have seen Dries Buytaert's DrupalCon keynote and are looking forward to all the goodies coming in future Drupal 8 versions. The truth is none of those things will happen without people who want to make them happen in order to solve their own challenges with implementing Drupal solutions. Are you implementing decoupled solutions and have issues you are working on? In the middle of building up a suite of integrated media solutions? These core team meetings are ideal places to bring in those issues, discuss solutions, and be part of shaping where Drupal 8 is heading. Read on for details.

  1. The JSON API and GraphQL meeting is held every week at 2pm UTC in the #drupal-wscii IRC channel.
  2. There is a monthly meeting on all API first work (REST, Waterwheel, JSON API, GraphQL) every third Monday of the month at 11am UTC in Google Hangouts.
  3. The Panels ecosystem meeting is on every Tuesday at 10am UTC in the #drupal-scotch IRC channel.
  4. There are two usability meetings every week! One at 7pm UTC on Tuesday while the other is at 7am UTC on Wednesday. Pick the best for your timezone. The meetings are at, get an invite at
  5. There is a default content in core meeting every other week on Tuesdays at 9pm UTC, and at 3pm UTC on the opposite weeks. Pick the best for your timezone. The meetings are at, get an invite at
  6. There are two theme component library meetings on Wednesday, one at 1pm UTC and one at 5pm UTC. Pick the best for your timezone. The meetings are at, get an invite at
  7. There is a media meeting every Wednesday at 2pm UTC, join in the #drupal-media IRC channel.
  8. The multilingual meetings are still going on for years at 4pm UTC on Wednesdays, join at #drupal-i18n in IRC.
  9. The workflow initiative meets every Thursday at noon UTC in #drupal-contribute on IRC.
  10. Wanna help with migrate? The team has Google Hangouts at Wed 1pm UTC or Thu 1am UTC on an alternating weekly schedule. The #drupal-migrate IRC channel is also used as a backchannel.
  11. Last but not least the new user facing core theme initiative meets every other Thursday at 3pm UTC in, get an invite at

Below is the calendar of all the meetings, subscribe to the iCal feed at


Oct 18 2016
Oct 18

It’s a joy to see how Drupal 8 fulfills its promise of becoming more convenient in every way! We’ve discussed how it respects web accessibility standards, facilitates product management with the Inline Entity Form module, uses the BigPipe module’s power to make sites load faster, and much more. Great improvements have also touched the configuration management issue. This will be today’s topic for discussion.

Configuration: a glimpse at what it is

Configuration includes any elements on your site that are configurable other than your content. Among them are: module and theme settings, display modes, widget and block placement, navigation menus, content types, taxonomy vocabularies and many more. You realize just how many there are when you need to get something changed around :) For example, when you need to move your configuration changes between your site’s environments (dev, test, and live).

A new handy configuration management system in Drupal 8

In Drupal 7, moving configuration between your site’s environments used to be a daunting task. So one of the main initiatives for Drupal 8 was to create a convenient configuration management system.

Now it’s done! The Configuration Management Initiative (CMI) has been successful. Congrats to all of us, because we now have a functional configuration management system built into the Drupal core that lets us store, modify and move our configuration data quickly and easily.

Manage your configuration in files

By default, your configuration settings are stored in the database. However, Drupal 8 offers the possibility to convert them to code and store them in files instead. It’s as simple as that: export your configuration from database to files, and, when you deploy it, just import it to the database of the new environment. For example, you can make some changes on your dev site, export them to files, deploy on your test site and import to the test site database, and so on.

For managing your configuration in files, the Configuration Manager module or the Drush command-line tool could be very useful to you.

Using files for configuration management is really time-saving. You no longer have to remember what has changed, say, on your dev site, and repeat these changes on your test site and your live site. Moreover, the process can be made automated. It’s also a really good idea to make it version-controlled.
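With Drush, the export/import round trip can be sketched like this (a minimal example, assuming Drush 8 and a Drupal 8 site whose configuration sync directory is already set up):

```shell
# On the source (e.g. dev) site: write the active configuration
# from the database out to the sync directory as YAML files.
drush config-export

# Commit and deploy those files, then on the target (e.g. test) site:
# load the files back into that site's database.
drush config-import
```

Drush will ask for confirmation before overwriting anything, so you get a chance to review what is about to change.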

Working with the same website and different websites

The new configuration management system is designed to facilitate moving your configuration from one version of your site to another, but NOT between different sites. So please keep in mind that the UUID values of the source and the destination sites should match. Only then can the configuration management work.

However, nothing is impossible in Drupal, and if you wish to migrate configurations from one site to another, you could try Drupal 8 Features Module — which, by the way, offers some extra flexibility that the CMI does not yet have.

By the way, an in-depth overview of configuration in Drupal 8 with lots of technical details was written about a year ago by our Drupal developer. Check it out!

Have questions, need assistance, want a new website? We are always glad to hear from you and configure our efforts to exactly fit your needs ;)

Oct 18 2016
Oct 18

Years ago there was It had instant search, module reviews, and an always up-to-date feed of the newest Drupal modules. It was wonderful. I was subscribed to their RSS feed of the latest modules because it made me feel ahead of the curve, knowing all the latest contrib additions for Drupal; I learned about a lot of useful little (and big) modules that way.

Now is no more; there is just a ghost of a website. Ironically, one of its last news articles is headlined "Where did we go for 5 days". That was almost 5 years ago. Meanwhile, the Drupal Reddit community has grown to 6,000 subscribers. I frequently visit /r/Drupal; it's a great resource for industry news. It is, however, not a replacement for

Presenting /r/Drupal_modules and /r/Drupal_Themes

That's why I created /r/Drupal_modules and /r/Drupal_Themes. I had been playing with the idea for a while, and then on /r/Drupal I saw I was not the only one looking for a feed of new Drupal modules. I went ahead and developed a Python script to scrape newly added modules and themes from and post them to /r/Drupal_modules and /r/Drupal_Themes. Oh, and there is also /r/Drupal_Distributions, I almost forgot about that. Interestingly, I now see that very recently there are more new projects in the Distributions feed than in the Themes feed.
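The scraping side of such a script can be sketched roughly like this (a minimal sketch: the feed contents are faked inline, and the actual fetching from the project feed plus the posting step via the Reddit API, e.g. with PRAW, are left out):

```python
import xml.etree.ElementTree as ET

def parse_new_projects(rss_xml):
    """Extract (title, link) pairs from an RSS feed of new projects."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

if __name__ == "__main__":
    # In a real script the XML would be fetched from the project feed,
    # and each new entry would then be submitted to the subreddit
    # through the Reddit API; both steps are omitted here.
    sample = ("<rss><channel>"
              "<item><title>Example Module</title>"
              "<link>https://www.drupal.org/project/example</link></item>"
              "</channel></rss>")
    for title, link in parse_new_projects(sample):
        print(title, link)
```

Keeping the parsing in a small pure function like this also makes it easy to deduplicate against projects that were already posted.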

The idea is that these subreddits are used only to list Drupal projects. I set this up about a month ago just to see if the system works, and so far it looks good. Now I want to invite everyone to use the comments for module/theme reviews, use upvotes and downvotes to rate your favorite projects, and last but not least: submit links to new projects that are not on As we don't currently have a central repository of Drupal modules hosted on GitHub and elsewhere, we can make it happen on Reddit.

And a last note: I didn't scrape all existing projects on I think a lot of d.o. projects are no longer maintained, possibly made obsolete by newer modules, and there are just too many to scrape without having to worry about performance and politeness. Feel free to submit any modules you think would be useful to have in the subreddit listings.

Oct 18 2016
Oct 18

Mobile apps are known to handle the offline world very well thanks to native caching of basically all their assets, whereas web applications are falling behind, losing this battle. Many of us cannot afford to create a dedicated mobile app, so we need to find a solution with what we already have.

That solution is Application Cache, or AppCache for short. Of course, like any other piece of software, it is not perfect and has its gotchas. Moreover, it has been marked as deprecated for some time now, but basically no real alternative exists yet. Actually, there is an alternative which will eventually take AppCache’s place, but unfortunately it is not yet supported in all major browsers: ServiceWorker. It has some neat features such as push notifications, but more about it in one of our next posts.

AppCache allows you, as a developer, to select which files the browser should cache, so they are available to offline users. This means that your web page will keep working even if a user loses their internet connection and reloads the page.



Cache manifest file

The cache manifest is a simple text file that serves as a config file, listing all the resources the browser should cache for offline use.

To enable AppCache on a site, you reference the manifest file in an attribute on your HTML tag, like this:

<html manifest="manifest-example.appcache">

Of course, this attribute must be included on every page you want cached. Pages listed in your manifest file will also be cached, even if they do not have the manifest attribute.

Please note that AppCache does not distinguish between GET variables and the URL path, which means that “/some-url/?foo” and “/some-url/?bar” are, from AppCache’s point of view, two different resources and will both be cached. So it is best to reference each resource by a single URL.

There’s a neat feature in Chrome which lets you inspect and manage the browser’s AppCache by visiting chrome://appcache-internals/.

Please note that the manifest file must be served with the text/cache-manifest MIME type, which means you might need to update your Apache2 or Nginx configuration to return the correct MIME type for the .appcache extension.

For Apache2, simply add the following line to your config:

AddType text/cache-manifest .appcache
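For Nginx, a sketch of the equivalent (the exact placement depends on your existing configuration) is a `types` block mapping the extension:

```nginx
# Map the .appcache extension to the cache-manifest MIME type.
types {
    text/cache-manifest appcache;
}
```

Note that a `types` block replaces the inherited mappings in that context, so in practice you may prefer adding the line to your site-wide mime.types file instead.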

Structure of manifest file

The structure of the file is quite simple. Here is an example, which I shall explain below.

CACHE MANIFEST
# 2010-06-18:v3

# Explicitly cached entries
CACHE:

FALLBACK:
# offline.html will be displayed if the user is offline
/ /offline.html

# All other resources (e.g. sites) require the user to be online.
NETWORK:
*

# Additional resources to cache
CACHE:

The first line, CACHE MANIFEST, is mandatory.

As you can see, lines starting with # are comments. If you want to force the browser to re-cache the files, you need to make a change in the manifest file itself. You can do this by changing a comment, hence the second line, which acts as a sort of cache version.

Entries after the mandatory line are URLs the browser will cache.

FALLBACK entries are basically a sort of redirect. The following line:

/ /offline.html

will serve /offline.html when the user tries to reach / while offline.

With the NETWORK directive you specify which URLs are white-listed resources that require a connection to the server. All these URLs bypass the cache, even if the user is offline. Usually you put “*”, meaning all other URLs require a network connection.

At the end you can see the CACHE directive; these are just more resources we want cached.

NOTE: the cache can be updated in one of the following ways:

  1. User clears the cache manually from the browser
  2. The manifest file is altered. If one of the resources listed in the manifest changes, the browser will NOT re-cache it; the manifest itself must be changed to force a re-caching procedure.
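In the browser, you can react to the second case programmatically: the `window.applicationCache` object fires an `updateready` event once a changed manifest has triggered a re-download. Below is a small sketch (the `shouldSwap` helper is just introduced here for illustration):

```javascript
// Minimal sketch of reacting to an AppCache update in the browser.
// Pure helper so the decision logic can be exercised outside a browser;
// UPDATEREADY is the AppCache status meaning "new cache downloaded".
function shouldSwap(status, UPDATEREADY) {
  return status === UPDATEREADY;
}

if (typeof window !== 'undefined' && window.applicationCache) {
  var cache = window.applicationCache;
  cache.addEventListener('updateready', function () {
    if (shouldSwap(cache.status, cache.UPDATEREADY)) {
      cache.swapCache();        // activate the freshly downloaded cache
      window.location.reload(); // reload so the page actually uses it
    }
  });
}
```

Without the reload, the page keeps running against the old cache even after `swapCache()` has been called.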

Drupal 8

When reading this post, you must have been wondering why the title mentions Drupal while all this guy writes is technical gibberish. Well… it is good to know how things work under the hood, even though you will probably never need this knowledge when using the Drupal module for AppCache. And to be fair, I kind of told you to skip the first part, didn’t I?

So, let us move on to Drupal. But first, let me explain how AppCache for Drupal is designed:

  1. The manifest attribute is added to pages which are completely separate from the online pages, even if they serve the same content. This means you need to make two versions of each page.
  2. Those offline pages have a different route than the original, for example node/1 becomes offline/node/1, and should use a different template.
  3. The manifest file should also contain assets such as JavaScript, styles, logos and images.

Let’s download and install the offline_app module for Drupal 8, which implements AppCache, using Drupal Console with these two commands:

drupal module:download offline_app
drupal module:install offline_app

Now, let’s make some content available offline, shall we?

I have prepared a clean Drupal 8 installation for this purpose, so let’s go ahead and generate some random Articles using the Devel module by visiting the following URL (do not forget to enable Devel Generate as well):


If you do not have Devel module, you can get it and enable it by using Drupal Console like this:

drupal module:download devel
drupal module:install devel
drupal module:install devel_generate

Since we want to show our Articles, we create a simple Page View and name it “articles”. We select the Teaser display mode and the path /articles. The View should now look like this:

Articles in AppCache

If you save the view and visit /articles, you should see all of your articles.

So, if we want this view to be available offline, we need to add new Display to this view and select offline. You can see that I have already done that on the previous image.

Once you do that, go ahead and edit the Content view mode so it reads “Offline teaser”. The rest of the settings can be left alone, unless you want to add custom filters, sorting or pagers. The view should look like this:

Articles in AppCache 2

NOTE: if you want to display Offline teaser on one display and Teaser on another, you first have to click on Content and select “For: This … (override)”. Otherwise you will be changing these settings on both displays at once.

Offline teaser

Since we decided to show Content rather than fields in the View, we need to go to the node display page, where we choose which fields to render. So, go ahead and visit the following page:


At the moment you probably have just 3 displays: Default, RSS and Teaser. At the bottom of the page there is a “Custom display settings” section; click it, select “Offline teaser” and click Save.

Offline teaser 2

Now the “Offline teaser” display should be visible for us to click and edit. Now we can:

  • set the Image format to Offline image, medium size
  • set the Body format to Trimmed to 600 characters, and
  • disable comments

Save it!

Manage display

Now go to AppCache’s configuration page, which is located here:


There are quite a few settings for you to check. Let’s cover the basics:

  • The Manifest tab has Pages, Fallback and Network sections. Fallback and Network are already filled in. If you want to know more, please read the beginning of this post. The Pages section contains URLs you want to cache explicitly.
  • The Content tab has Pages; this is where we will enter our view and some nodes. Homepage and Offline messages are self-explanatory. Menu is the menu generated for your offline app (more about it later). Images holds links to all the images.
  • Assets are the CSS and JS files you want to include on your offline pages.
  • And then there are Settings and Homescreen, which are not really important at the moment.

First, we go to the Content tab and add our new view by entering:


The syntax is described below the textarea: use alias/view:name:display when referencing a view.


For the sake of testing, let’s add one more node page. The first Article in the view we created earlier has a NID of 26, so we enter it below the articles view, like this:


The last thing we need to set is the Menu. This menu is shown when a user visits /offline on your site. We will add two items: the first is Home, which points to the home page of the offline section; the second is a link to our articles view. So this is what we need to enter:


Now we save the configuration and visit the offline section of the site by going here:


This is what we should see:

Offline view

This welcome text can of course be adjusted to your needs.

The browser should now sync all the files into its internal cache; meanwhile you can take a look at your /manifest.appcache file, which should look like this:

CACHE MANIFEST
# Timestamp: 1467983502

FALLBACK:
/ /offline/appcache-fallback

Let’s go offline! Turn off your network connection; if you are using containers or Vagrant, just shut them down. I used Docker, so what I do is:

docker stop d8

While offline, go to the root of your website (/). You should see a welcome screen like this one:

Offline view 2

This happens because we have a FALLBACK entry in our manifest file, which tells the browser to fetch /offline/appcache-fallback if it cannot fetch / (your Drupal site’s root page).

Now, go to the Articles. You should see all the articles. But wait, where are the images? Well, if you take a look at your manifest file again, you will see that no images are listed, and since we use images in our view, we also need to list the images in the manifest explicitly.

So, let’s turn our internet connection back on (or just start your container or Vagrant box) and head to the AppCache configuration page. Under the Content tab click on Images, tick the checkbox, and then click the link below the textarea which says “Click here to update the list of images”. The page should refresh. Go ahead and click on Images again; this time there should be links to all of the images our view uses to render articles. Save the configuration, then head to /manifest.appcache and refresh it a few times; the links to the images should appear. If they don’t, go back to the AppCache settings, open Images, copy all the text, paste it into the Pages section of the Manifest tab, and click Save. If you check /manifest.appcache again, the image links should appear (refresh a few times). I think this is a bug, since the checkbox under Images clearly states that those images will be added to the manifest, but they are not. Anyway, as I said, there is a workaround: you can just put them in directly.

So, if you go offline and revisit the view, all the images should be visible even though you are not connected.

Images offline

You might be seeing slightly uglier output; in my case it looks right because I have added the aggregated CSS files to Assets, like this:

Assets offline

Since we added one article node to the offline section as well, if you hover over the titles in the offline view you should see that the first one links to /offline/this-is-nid-26 whereas the others point to /. The AppCache module knows the first article is also available offline, which is why it links to it. If you click it, the full node content should appear.

The AppCache module is not the final solution

This module is of course not a perfect solution, but it lets you publish your content offline in a short period of time. There are still some bugs and missing features, such as automatically adding to the manifest the nodes included in a view. One other thing to note: if any of the resources is unavailable while the browser is caching, the cache will not be updated at all. This is not due to the module, but the nature of AppCache itself.

Oct 17 2016
Oct 17

Last week, we launched a new version of Acquia Lift, our personalization tool. Acquia Lift learns about your visitors' interests, preferences and context and uses that information to personalize and contextualize their experience. After more than a year of hard work, Acquia Lift has many new and powerful capabilities. In this post, I want to highlight some of the biggest improvements.

Intuitive user experience

To begin, Acquia Lift's new user interface is based on the outside-in principle. In the case of Acquia Lift, this means that the user interface primarily takes the form of a sidebar that can slide out from the edge of the page when needed. From there, users can drag and drop content into the page and get an instant preview of how the content would look. From the sidebar, you can also switch between different user segments to preview the site for different users. Personalization rules can be configured as A/B tests, and all rules affecting a certain area of a page can easily be visualized and prioritized in context. The new user interface is a lot more intuitive.

The settings tray in Acquia Lift

Unifying content and customer data

Having a complete view of the customer is one of the core ideas of personalization. This means being able to capture visitor profiles and behavioral data, as well as implicit interests across all channels. Acquia Lift also makes it possible to segment and target audiences in real time based on their behaviors and actions. For example, Acquia Lift can learn that someone is more interested in "tennis" than "soccer" and will use that information to serve more tennis news.

It is equally important to have a complete view of the content and experiences that you can deliver to those customers. The latest version of Acquia Lift can aggregate content from any source. This means that the Acquia Lift tray shows you content from all your sites and not just the site you're on. You can drag content from an ecommerce platform into a Drupal site and vice versa. The rendering of the content can be done inside Drupal or directly from the content's source (in this case the ecommerce platform). A central view of all your organization's content enables marketers to streamline the distribution process and deliver the most relevant content to their customers, regardless of where that content was stored originally.

Content can also be displayed in any number of ways. Just as content in Drupal can have different "display modes" (i.e. short form, long form, hero banner, sidebar image, etc), content in Acquia Lift can also be selected for the right display format in addition to the right audience. In fact, when you connect a Drupal site to Acquia Lift, you can simply configure which "entities" should be indexed inside of Acquia Lift and which "display modes" should be available, allowing you to reuse all of your existing content and configurations. Without this capability, marketers are forced to duplicate the same piece of content in different platforms and in several different formats for each use. Building a consistent experience across all channels in a personalized way then becomes incredibly difficult to manage. The new capabilities of Acquia Lift remedy this pain point.

The best for Drupal, and beyond

We've always focused on making Acquia Lift the best personalization solution for Drupal, but we realize that customers have other technology in place. The latest version of Acquia Lift can be installed on any Drupal or non-Drupal website through a simple JavaScript tag (much like the way you might install Google Analytics). So whether it's a legacy system, a JavaScript application, a decoupled Drupal build with custom front end, or a non-Drupal commerce site, they can all be personalized and connected with Acquia Lift.

In addition, we've also taken an API-first approach. The new version of Acquia Lift comes with an open API, which can be used for tracking events, retrieving user segments in real time, and showing decisions and content inside of any application. Developers can now use this capability to extend beyond the Lift UI and integrate behavioral tracking and personalization with experiences beyond the web, such as mobile applications or email.

I believe personalization and contextualization are becoming critical building blocks in the future of the web. Earlier this year I wrote that personalization is one of the most important trends in how digital experiences are being built today and will be built in the future. Tools like Acquia Lift allow organizations to better understand their customer's context and preferences so they can continue to deliver the best digital experiences. With the latest release of Acquia Lift, we've taken everything we've learned in personalization over the past several years to build a tool that is both flexible and easy to use. I'm excited to see the new Acquia Lift in the hands of our customers and partners.

Oct 17 2016
Oct 17

With the advancement of technology, there are infinite ways and opportunities to work remotely, no matter where you are. In this week’s episode of The Secret Sauce, we share some strategies for making remote work - well, work.

iTunes | RSS Feed | Download | Transcript


Allison Manley [AM]: Hello and welcome to The Secret Sauce, a short podcast by, that offers a little bit of advice to help your business run better.

I’m Allison Manley, Sales and Marketing Manager here, and today’s advice comes from Scott DiPerna and Lauren Byrwa. In this global economy, there are infinite ways and opportunities to work remotely, no matter where you are. Scott and Lauren are going to share some strategies on how to collaborate successfully across great distances and time zones.

Scott DiPerna [SD]: Hi, I’m Scott DiPerna.

Lauren Byrwa [LB]: Hi, I’m Lauren Byrwa.

SD: Recently we worked with a client in California who had hired a content strategy team in New York City. Lauren, with our development team, was in Chicago, and I, as the Project Manager, was in South Africa. We had lots of interesting new challenges in this project, and like we do in most projects, we learned a lot about working well with our clients, our collaborators, and with each other.

LB: So, Scott, what was it like trying to work from South Africa, being seven to nine hours ahead of everyone else?

SD: Well, it wasn’t that different from working remotely in Richmond, Virginia. I do shift my working hours to the evening to overlap with the team in the States. But just as I did in Virginia, we do all of our meetings on a video chat regardless of where we are. It’s part of our process especially with our clients being all over the country, so that part wasn’t really different. But we did do a few things differently in this project — not so much because we were all in different places, but because we had multiple vendors and teams collaborating together. Do you want to talk about some of the adjustments that we made in terms of meetings?

LB: Yeah, so we met with the content strategy team weekly. We met with our product owner three times a week. We met with our full team, our full team of stakeholders, weekly. And in addition to that we still had all our usual agile ceremonies like scrum, demos, retrospectives, that we always do on projects. These meetings especially were productive because we had all of the strategic functionality up front, and we could ask specific implementation-level questions early on, and we could vet them both with the product owner specifically, with the strategists specifically, and with the entire group. But I think there are a few other ways that the thorough strategy helped. Do you want to talk about those?

SD: Sure. I think there were two parts specifically that were really helpful. Doing a lot of the strategic planning up front meant that the client was a lot more conversant in the details of the product that we were planning to build for them. We just had a lot more conversations with them up-front and could talk in detail. The other piece was having much of the functionality visually documented in wireframes that the strategy team kept current with changes in the functionality meant that the client always had a “picture” in their minds of what it was that we were talking about. When everyone is working remotely from one another, these kinds of visuals help conversations over video chat be infinitely more productive, which I think is something we see in all of our projects. So all of this planning had a really helpful impact on your ability to estimate the work up front, too. Do you want to talk a bit about that?

LB: Because we had the complete and canonical wireframes from the strategists we were able to fairly precisely estimate all of the functionality that they had scoped out in those wireframes. This meant that even before we started development, we were able to work with our product owner to go over in detail the scope of work we anticipated to be able to complete within their budget. We had many conversations with him about what features would be most important for their users, and were able to prioritize accordingly. It meant that we could talk about the specifics of our implementation in really granular detail internally, both with the strategists, both with the product owner. We collaboratively evaluated if there were options to streamline our implementation, and we were able to address specific questions that usually would not come up until user acceptance testing. All of these conversations resulted in updates to both the canonical wireframes that the strategists were maintaining, as well as the implementation documentation that we were maintaining on our end. And it meant that the picture that the strategists had, that they kept, that the clients had in their head, stayed the same. And it was all reflected in what they could expect to be spending on the implementation for development.

SD: Right. And since we were documenting those functional changes in the wireframes, we could capture that quickly and review it with the client in the middle of a sprint. And speaking of that sort of adjustment in the middle of a sprint, you started doing mini-demos of work in progress, demoing that to the product owner. Can you talk a little bit about why you shifted in that direction?

LB: Yeah, so because we already had all of these meetings set up, and because we already had those canonical wireframes that showed all of the functionality in the picture, we wanted to make sure that they could see the picture of their website, the implementation, as quickly as possible too. So when we had specific implementation questions about things that were spec-ed out in the wireframes, we would demo it for the client. And they could vet it, both for the client and the strategists, and come back to that . . . is this the best choice for the user. It meant that all of those questions of, is this the best route to go down, does this work the way that I anticipated it to, were answered not even before user acceptance testing — they were answered even before the demo. So we could pivot our strategy accordingly, and we did on a lot of issues.

SD: So given all of these constraints that we faced on the project, where we had a client in one part of the States, a content strategy team in another part of the States, even our own internal strategy team split up across continents, and a pretty sizeable project with some interesting technical problems to solve — what were some of the biggest take-aways that you had from that project?

LB: I think the number one thing that I took away from that project was that we can solve every problem together, and that we can come to a better conclusion when we come to it together. The collaborative effort with the strategy team to focus conversations through the lens of the primary audience really helped us anchor our strategy and our implementation in that primary user, and not in some of the other things that often derail projects. We had complete and thorough documentation both on the strategy level and on the implementation, and both of those were transparent to everyone accessing the project. And I think that really helped us to streamline the entire project.

SD: I think for me one of the other things is that we were able to form really good relationships both with the client and with the third-party team we were collaborating with. And that made all of our conversations run more smoothly. We were able to have fun even in the difficult phases of the project, and even going through tough negotiations around scope or functionality or budgets or stuff like that — having those good relationships and having that good level of communication with them just made the whole process go more smoothly.

AM: That’s the end of this week’s Secret Sauce. For more great tips, please check out our website. You can also follow us on Twitter at @palantir. Have a great day!

Oct 17 2016
Oct 17

Composer is the de-facto dependency manager for PHP; it is therefore no surprise that it is becoming more common for Drupal modules to use Composer to include the external libraries they need to function. This trend has rather strong implications for site builders: once a site uses at least one module that uses Composer, it becomes necessary to also use Composer to manage your site.
To make managing sites with Composer easier, Pantheon now supports relocated document roots. This feature allows you to move your Drupal root to a subdirectory named web, rather than serving it from the repository root. To relocate the document root, create a pantheon.yml file at the root of your repository. It should contain the following:
api_version: 1
web_docroot: true

With the web_docroot directive set to true, your site will be served from the web subdirectory. Using this configuration, you will be able to use the preferred Drupal 8 project layout for Composer-managed sites established by the project drupal-composer/drupal-project. Pantheon requires a couple of changes to this project, though, so you will need to use the modified fork for Pantheon-hosted sites.
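With the document root relocated, a Composer-managed site following the drupal-composer/drupal-project layout ends up looking roughly like this (a sketch only; the exact files vary by project):

```
my-site/
├── composer.json      # project dependencies, including drupal/core
├── pantheon.yml       # api_version and web_docroot settings
├── vendor/            # Composer-managed libraries
└── web/               # relocated Drupal document root
    ├── core/
    ├── modules/
    ├── themes/
    └── sites/
```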

Installing a Composer-Managed Site

Pantheon has created an example repository derived from drupal-composer/drupal-project for use on Pantheon with a relocated document root. The URL of this project is:

There are two options for installing this repository: you may create a custom upstream, or you may manually push the code up to your Pantheon site.

Installing with a Custom Upstream

The best way to make use of this repository is to make a custom upstream for it, and create your Drupal sites from your upstream. The example-drops-8-composer project contains a couple of Quicksilver “deploy product” scripts that will automatically run composer install and composer drupal-scaffold each time you create a site. When you first visit your site dashboard after creating the site, you will see that the files created by Composer—the contents of the web and vendor directories—are ready to be committed to the repository. Pantheon requires that code be committed to the repository in order to be deployed to the test and live environments.

We’ll cover the workings of the Quicksilver scripts in a future blog post. In the meantime, you may either use the example-drops-8-composer project directly, or fork it and add customizations, if you are planning on creating several sites that share a common initial state.

Installing by Manually Pushing Up Code

If you don’t want to create an upstream yet, or if you are not a Pantheon partner agency, you can use the following Git instructions instead. Start off by creating a new Drupal 8 site; then, before installing Drupal, set your site to Git mode and do the following from your local machine:

$ git clone git@github.com:pantheon-systems/drops-8-composer.git my-site
$ cd my-site
$ composer install
$ composer drupal-scaffold

The “deploy product” Quicksilver scripts run during site create, so you will need to run composer install and composer drupal-scaffold yourself after you clone your site. Then, use the commands below to push your code up to the site you just created:

$ git add -A .
$ git commit -m "web and vendor directory from composer install"
$ git remote set-url origin ssh:[email protected]:2222/~/repository.git
$ git push --force origin master

Replace my-site with the name that you gave your Pantheon site, and replace ssh:[email protected]:2222/~/repository.git with the URL from the middle of the SSH clone URL from the Connection Info popup dialog on your dashboard.

Copy everything from the ssh:// through the part ending in repository.git, removing the text that comes before and after. When you run git push --force origin master, you will completely replace all of the commits in your site with the contents of the repository you just created.
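As a sketch of that extraction step (the site ID in the URL below is made up for illustration), you could pull the ssh:// portion out of the full clone command with grep:

```shell
# Full SSH clone command copied from the Connection Info dialog
# (hypothetical site ID shown here).
CLONE_CMD='git clone ssh://codeserver.dev.1234abcd@codeserver.dev.1234abcd.drush.in:2222/~/repository.git my-site'

# Keep only the part from ssh:// through repository.git.
REPO_URL=$(printf '%s' "$CLONE_CMD" | grep -o 'ssh://[^ ]*repository\.git')
printf '%s\n' "$REPO_URL"
```

The resulting value is what you would pass to `git remote set-url origin`.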

Updating a Composer-Managed Site

Once your site has been installed from this repository, you will no longer use the Pantheon dashboard to update your Drupal version. Instead, you will manage your updates using Composer. Updates can be applied either directly on Pantheon, by using Terminus, or on your local machine.

Updating with Terminus

To use Terminus to update your site, download and install the Terminus Composer plugin, placing it in your ~/terminus/plugins directory. This plugin currently only works with Terminus version 0.12.0. If you are one of those time-travellers reading this blog from the future, you may find that the terminus-composer plugin works with the latest version of Terminus.

Once you have the plugin installed, you will be able to run composer commands directly on your Pantheon site:
$ terminus composer update --site=sitename --env=dev
Be sure that your site is in SFTP mode first, of course. Note that it is also possible to run other composer commands using the Terminus Composer plugin. For example, you could use terminus composer require drupal/modulename to install new modules directly on Pantheon.

Updating on Your Local Machine

If you have already cloned your site to your local machine, you may also run Composer commands directly on your site’s local working copy, and then commit and push your files up as usual.

Either way, you will find managing your Drupal sites with Composer to be a convenient option, and one that, sooner or later, you will need to adopt. Give it a spin today, and see how you like the new way to manage Drupal code.

Topics Development, Drupal Planet, Drupal
Oct 17 2016
Oct 17

On September 28, 2016, The Drupal Association board hosted a public board meeting during DrupalCon Dublin. It was wonderful to connect with the community in person to share updates and answer questions.

Over the last few months, we provided updates on the Association’s current focus, followed by department-specific updates. This board meeting shared highlights of specific areas, including:

  • DrupalCon New Orleans
  • Front page improvements
  • Membership campaigns

This public board packet provides links to those presentations along with updates on other programs. It also includes a dashboard of all our current work. You can also watch the video recording here.

We love hearing from the community. Contact us anytime to share your feedback or ask questions via email or @drupalassoc.

The next public board meeting will be on 21 November, 2016 at 7:00 am PT / 15:00 GMT. You can register for the meeting here.

Oct 17 2016
Oct 17

Once we have established ownership for all of the content within our web properties, it may be helpful to define the intended use of those properties next.

This may seem obvious and unnecessary to state, but in my experience it has been important to define the intended use of the web property that is currently being described. This ensures that everyone is on the same page and understands a common set of goals for the property.

Public-facing websites are commonly intended to communicate information to audiences outside of an organization. That is why they are public, and why they are usually distinguished from private, inward-facing sites, such as an intranet, which is intended to communicate information to internal audiences within an organization. Not everyone understands this, so it is important to establish the reasoning behind the existence of the property so as not to confuse it with the purpose of another property.

Occasionally, the intended use of a property will be defined in part by the negative, that is, by what it is NOT intended to be used for. For example, it may be useful to state that no part of a public site should be used for personal content, especially if alternative resources exist explicitly for that purpose.

Here is an example of how intended use is sometimes defined by the negative:

"Academic Department websites are intended for the use of communicating information about the department, its faculty, degree requirements, course offerings, policies, etc. Academic Department websites are not intended for hosting websites of individual faculty, websites based on grant funding, research projects, or specific course-related materials, or for private (i.e. password-protected) websites or applications."

The negative in this example addresses some misperceptions about the intended use of a departmental site by listing some of its common past misuses.

Here are some questions to consider for explaining your own intended use policy:

  • What is the primary purpose of the property or Website?
  • What are the secondary and tertiary purposes, if they exist? 
  • Are there any activities or content which occasionally find their way onto this property which should live elsewhere, and thus explicitly be listed as not intended for this property?
  • What are the “grey areas” or things which are unclear where they belong?
  • Is there a process for dealing with grey areas?
  • Who would help determine that process if it doesn’t exist?

Intended use can be a controversial subject for many organizations, so think carefully and cautiously throughout this exercise. I recommend gathering input from a broad range of representative stakeholders to discuss some of the stickier points before defining and presenting a plan that may draw criticism when reviewed by the larger organization.

As with most things, intended use should be based in reason and make sense to most people. That being said, there may be occasions in which some level of compromise is required in order to accommodate content that doesn’t have a home otherwise. This is typically okay in small amounts and for brief periods, until alternative solutions can be found.


This post is part of a larger series of posts, which make up a Guide to Digital Governance Planning. The sections follow a specific order intended to help you start at a high-level of thinking and then focus on greater and greater levels of detail. The sections of the guide are as follows:

  1. Starting at the 10,000ft View – Define the digital ecosystem your governance planning will encompass.
  2. Properties and Platforms – Define all the sites, applications and tools that live in your digital ecosystem.
  3. Ownership – Consider who ultimately owns and is responsible for each site, application and tool.
  4. Intended Use – Establish the fundamental purpose for the use of each site, application and tool.
  5. Roles and Permissions – Define who should be able to do what in each system.
  6. Content – Understand how ownership and permissions should apply to content.
  7. Organization – Establish how the content in your digital properties should be organized and structured.
  8. URLs – Define how URL patterns should be structured in your websites.
  9. Design – Determine who owns and is responsible for the many aspects design plays in digital communications and properties.
  10. Personal Websites – Consider the relationship your organization should have with personal websites of members of your organization.
  11. Private Websites, Intranets and Portals – Determine the policies that should govern sites which are not available to the public.
  12. Web-Based Applications – Consider use and ownership of web-based tools and applications.
  13. E-Commerce – Determine the role of e-commerce in your website.
  14. Broadcast Email – Establish guidelines for the use of broadcast email to constituents and customers.
  15. Social Media – Set standards for the establishment and use of social media tools within the organization.
  16. Digital Communications Governance – Keep the guidelines you create updated and relevant.

Stay connected with the latest news on web strategy, design, and development.

Sign up for our newsletter.
Oct 17 2016
Oct 17

It isn't easy to build a strong community. Many event organizers work to bring people together for Drupal. Community Cultivation Grants are one tool to make the work a little easier. With a grant, you can strengthen the local community. You can help drive the adoption of Drupal.

Drupal Association members fund these grants. A few grant recipients have told us their stories. I'd like to share more about what has happened since the grants were awarded.

Andrey from DrupalCamp Moscow

The DrupalCamp Moscow 2014 organizers have connected with the organizers of other camps — DrupalCamp Siberia (in Novosibirsk in 2015) and DrupalCamp Krasnodar (in September 2016). They've shared experiences to inspire the communities in these other Russian cities.

Andrey tells us, "In Moscow, we don't have any large companies which offer Drupal services. Our community organizes all the local events. After DrupalCamp Moscow 2014, we've held more events than ever before. 6 meetups, 22 small meetings, a D8 Release Party and one Drupal burgers event have happened. We've had Drupal specialists from other cities of Russia and the world come to visit. New participants are always welcome here and we are seeing more and more of them."

Ricardo from Drupal Mexico City

Ricardo tells us, "We held another Drupal camp in Mexico City in 2015 with 250 attendees. In 2016, our dear fellows from Axai did the same in Guadalajara.

This year, we went off the island, just like Drupal 8 has, and we organized an even broader PHP event. It was amazing. The response was fantastic, we broke all our attendance records. We've grown the PHP Mexico community & PHP Way meetups and now have 1,000 members. Our attendees could become new Drupalists. But we expect to see new Drupalists come from the Symfony world.

We decided to have only one Drupal event in Mexico per year. In 2016, it was held in Guadalajara. If nobody else wants to organize an event in 2017, we'll probably do it again. If we organize the next Drupal event, it will probably happen together with a PHP event once again. Ultimately, community growth should be in concordance with demand growth. This hasn't happened here in CDMX, but we are hopeful that it will."

Martha from Drupal Guadalajara

From Martha: "We attended DrupalCamp Costa Rica in September and continue being connected to the Drupal Latino Community. After Guadalajara camp, there is more local Drupal awareness. Our company has received training and quote requests since the camp."

Community Cultivation Grants do more than build connections in our community and grow our contributors. They also help drive the adoption of Drupal.

Ivo from Drupal Roadshow Bulgaria

Ivo says, "Since the roadshow, we've met our goal of running a Drupal Academy. We now run the biggest Drupal Course at Software University in Bulgaria. We have more than 1200 registered students. Our activities were featured on Bulgarian National Television.
We are also proud of another result of the roadshow. One of the larger Drupal shops in Bulgaria opened their second office in a small town. We introduced Drupal there."

Tom from DrupalCamp Vietnam (2016)

Tom tells us, "I'm an entrepreneur and angel investor. Helping people become prepared for the digital enterprise is fulfilling to me. I want to spend more time coaching young developers with IT career decisions, and helping them learn how to use Drupal as a versatile data/content modeling tool that can act as a key platform to integrate with many other FOSS tools, including the MERN stack, Hadoop, Spark, Docker, OpenStack, etc.

Technology is always changing. What sticks is the experience you gain by contributing to an open-source community such as Drupal."

We're excited to see grant recipients building relationships in our community. You can connect with community and make more grants possible by joining the Drupal Association today.

Oct 17 2016
Oct 17

on October 17th, 2016

Our Digital Strategist, Jim Birch, will be presenting on Holistic SEO and Drupal at BADCamp X, the 10th annual Bay Area Drupal Camp being held October 20th - 23rd at the University of California, Berkeley. This will be the second year in a row in which Jim will be participating in the event.

BADCamp is the largest regional conference dedicated to Drupal and open-source software with over 1600 attendees descending on the UC Berkeley campus for four days of presentations, trainings, summits, and sprints.

In 2015, Jim presented Optimizing Drupal 7 HTML Markup to a crowded room of Frontend developers and Site builders.

This year, Jim's focus is the modern state of Search Engine Optimization: how we at Xeno Media define best practices for technical SEO using Drupal, and ideas on how to guide and empower clients to create the best content to achieve their goals.

This presentation will review:

  • What Holistic SEO is, and some examples.
  • The most common search engine ranking factors, and how to keep up to date.
  • An overview of Content strategy and how it can guide development.
  • An overview of technical SEO best practices in Drupal.

In addition, Jim will be giving a lightning talk on Friday at the Frontend summit. Summits are more conversational in nature, and this event will focus on the best practices and technologies used in Drupal development, with presentations and panel discussions.

Jim will be showcasing our soon-to-be-released Drupal contrib module Bootstrap Paragraphs.  The module is a suite of Paragraph bundles made with the Bootstrap framework's markup.

Oct 17 2016
Oct 17

The Americans with Disabilities Act was landmark civil rights legislation that tore down barriers preventing individuals with disabilities from fully participating in society. The bill covered important aspects of life in the 1990s, such as public transportation and employment. More than two decades later these things are still important, but technologies have emerged that raise new questions about how they can be made accessible for all users.

This year, I had the privilege of attending the 2016 Accessibility Summit, where presenters from organizations such as the World Wide Web Consortium, Adobe, and WebAIM talked about ways in which we can make the web more accessible to users with disabilities such as low vision, blindness, deafness, and limited dexterity.

One of my biggest takeaways was that I had been thinking about accessibility all wrong. Initially, I saw accessibility guidelines as a checklist. Although such lists are published by thought leaders like Google, it’s entirely possible for a website to adhere to accessibility criteria without effectively meeting the needs of disabled users.

While checklists are useful, they lack a human element. It helps to view accessibility as a holistic approach to design, development, and content that, at its core, relies on empathy and understanding of a wide range of user experiences.

Accessibility issues are ultimately user experience issues.

How do you bake accessibility into your process? Below are some ideas of how accessibility may become an inherent part of creating a website:

Create personas for disabled users to address accessibility. Some examples might be:

  • A person with low vision or blindness
  • A person who has recently suffered a stroke
  • A person who is positioned in the glare of the sun

The World Wide Web Consortium has created a diverse set of personas representing disabled users, which are available on their website. There are also solutions provided for common problems these users might face on the web.

Create tickets based on disabled user personas. These tickets should have specific quantifiable success criteria, such as: “a person with vision impairment can fill out this form.” This is a great platform to demonstrate to clients how accessibility is being achieved.

From the onset, design with accessibility in mind. Designers should familiarize themselves with accessibility guidelines and incorporate them into their work starting with the earliest concepts.

For instance, whenever I’m working with text, I run potential colors for both the text and the background through a contrast checker. Contrast checkers confirm whether the combined text color and background color will be readable. Such measures preempt the need to rethink the design later in the process, thus saving time and avoiding the pitfalls of presenting a client with ideas that cannot be realized.
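To make that check concrete, here is a small helper of my own (an illustrative sketch, not taken from any particular checker) that computes the WCAG 2.0 contrast ratio those tools implement. WCAG recommends a ratio of at least 4.5:1 for normal text:

```shell
# Compute the WCAG 2.0 contrast ratio between two hex colors (e.g. 336699).
# Each channel is linearized, relative luminance is
# L = 0.2126*R + 0.7152*G + 0.0722*B, and the ratio is
# (L_lighter + 0.05) / (L_darker + 0.05).
contrast_ratio() {
  fg=$1 bg=$2
  awk -v r1=$((16#${fg:0:2})) -v g1=$((16#${fg:2:2})) -v b1=$((16#${fg:4:2})) \
      -v r2=$((16#${bg:0:2})) -v g2=$((16#${bg:2:2})) -v b2=$((16#${bg:4:2})) '
    function lin(v) { v /= 255; return (v <= 0.03928) ? v / 12.92 : ((v + 0.055) / 1.055) ^ 2.4 }
    BEGIN {
      l1 = 0.2126 * lin(r1) + 0.7152 * lin(g1) + 0.0722 * lin(b1)
      l2 = 0.2126 * lin(r2) + 0.7152 * lin(g2) + 0.0722 * lin(b2)
      if (l1 < l2) { t = l1; l1 = l2; l2 = t }
      printf "%.2f\n", (l1 + 0.05) / (l2 + 0.05)
    }'
}

contrast_ratio 000000 ffffff   # black on white prints 21.00
```

Any pair that falls below 4.5 is a signal to adjust the palette before the design goes any further.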

When designing the UI, aim for fewer steps to task completion. By decreasing the number of keystrokes, steps, and time required to complete tasks, we can make websites more accessible for everyone.

A task that is mildly annoying to complete for an abled user can be prohibitively time-consuming and frustrating for a disabled user. All users can benefit from the simplification of tasks, but disabled users will be especially impacted.

During development, take advantage of the wide range of auditing tools available to check whether your site adheres to accessibility guidelines.

Navigate and complete tasks on your website with the tools available to disabled users. This doesn’t replace user testing, but it can provide some useful insights. Here are some ideas:

  • Try using a screen reader, such as ChromeVox. This helps you confirm whether your interactive elements are labeled clearly.
  • Change your iPhone’s accessibility settings and practice using your website with voiceover. Go into “Settings > General > Accessibility.”
  • Try using your keyboard only.

You can implement these techniques when demoing websites to clients to help them understand how different users access websites.

Recruit individuals with disabilities to participate in usability testing. This is the best way to confirm whether a website truly is accessible. Remember that accessibility guidelines are just the starting point. A site that checks off all the boxes may still have roadblocks for disabled users.

Disability rights lawyer Lainey Feingold provides on her website a list of nonprofits that offer usability testing by disabled individuals.

Additionally, here are some tips and tricks for creating accessible websites. 

  • Use flyingfocus.js to add a sense of movement to the :focus state of interactive elements. This enhances the user experience of tabbing through a web page.
  • Sighted users who tab through a website also benefit from the addition of skip links. (However, non-sighted keyboard users do not use skip links because screen readers have ways of getting around repetitive content.) What’s a skip link? It is a link at the top of the page that the user can select to “jump” to the main content. This saves time that would otherwise be spent tabbing through the site header on every page. A skip link can be styled so that it is hidden until a user tabs over it. Adding a CSS transition so the hidden link is visible for a fraction of a second longer will ensure that the user doesn’t miss it.
  • ARIA roles are useful, but employ them sparingly. Overuse of ARIA roles can actually diminish the accessibility of a website.
  • It’s dangerous to make assumptions about how users are interacting with and viewing your website. For instance, desktop users who zoom in on websites will wind up activating the mobile view. And some mobile users navigate with a keyboard.
  • Rather than “Read More” links, be specific: “Read more about ---.” This is important for users with screen readers who are unable to see the association between the “read more” link and the content it references. You can visually hide “---” so that only screen readers will detect it.
  • Blocks of text should be no more than 80 characters or glyphs wide (40 if CJK). 45-75 characters is considered ideal.
  • Avoid using screen reader detection. Visually impaired individuals do not want to feel like they are being treated differently or tracked.
  • Avoid using tabindex with positive values. Instead, structure your markup for logical and intuitive navigation.
  • Use tabindex="-1" to set focus on things that aren’t natively focusable.

Oct 17 2016
Oct 17

I had a wild aha moment last week while I was away at my first PM conference. I work in web and I'm a project manager. I thought I “got it”. Except, I guess I didn’t.

It wasn’t until I was surrounded in a ballroom of my peers, hearing Brett Harned's Army of Awesome rallying cry, seeing the words blown up on a screen that I realized, Oh my god. I'm not a glorified secretary.

I may not be the one coding, designing, or deploying a product, but what I do matters. It makes a difference. I'm part of my team in a tangible way. And there are others like me.

Similar to DrupalCon, the Digital PM Summit is a conference that travels around the US from city to city each year. This year it landed in San Antonio, a hop-skip and two-hour drive from my home in Austin, Texas.  

As a seasoned event manager, I tend to have a pretty agnostic relationship towards attending conferences. Speakers present their topics. Attendees politely pay attention, or don’t. The draw of a glowing MacBook is hard competition against topics which don't directly apply to me and the work I do.

But this time was different. For once, I not only understood the scheduled topics, I wanted to attend them. For once, I had trouble choosing. I was even excited to talk to strangers, not something that comes easy to me, because we already have something in common.

My world was rocked.

Over the course of three days, speakers and attendees shared tools, processes, tips, and horror stories of life in the PM trenches. It was quite cathartic and therapeutic to be surrounded by people who understand and empathize, because they live it, too.

Talking to other digital project managers this weekend was invaluable, and something I didn’t realize I was missing out on. Turns out, I wasn't the only one. While a handful of the attendees were newbies, like me, many others remember their first Digital PM Summit fondly. All these same warm-fuzzies I was feeling were part of the reason they come back.

Here’s a few of my biggest takeaways, many of which were reiterated by different people, in various situations, throughout the course of the event:

1. The struggles and challenges I face as a PM are normal. I'm not actually on fire, and nobody has died.

2. Early and honest communication helps solve and prevent problems.

3. Problems aren’t always external. Internal scope creep is real. 

4. Nobody's figured out how to virtually replicate an in-person whiteboard brainstorming session.

5. Project Managers should carve out time for themselves and often don’t. 

6. The importance of empathy, building relationships, and treating people like humans. 

Side note: If you haven't seen Derek's DrupalCon Dublin session on perfectionism or read Brené Brown's work on vulnerability and you work with people, do yourself a favor, and get caught up. 

As you can probably gather, DPM was quite a touchy-feely event, something that's not the most comfortable thing in the world for me. I think that twinge of discomfort helped me appreciate the honesty and open dialogue even more. For me, this event was professionally and personally beneficial and I've come home better prepared to work with my team, to engage with my clients, and to better appreciate and respect the work that I do. That we all do. 

If you're a PM, and you haven't heard this at all or lately, you are awesome. Your work matters. You're here because you're needed.

Oct 17 2016
Oct 17

This blog post covers 50% of what I’m going to be talking about at this year’s Museums Computer Group conference at the Wellcome Trust in London, held on the 19th of October. The follow-up post will go live on the 20th - you can sign up here to be notified.

There are three important things you need to understand about artificial intelligence:

  1. It exists now.
  2. It’s more capable than you know.
  3. It will replace you (unless you redefine who you are).

I’ll examine each one - it’ll be interesting to see what you think.

It exists now

Artificial intelligence, as a concept, has been around for a long time. From Hephaestus building the “fighting machines of the gods” to Mary Shelley’s Frankenstein, humans have thought about and created stories around our desire to replace the mighty gods with ourselves for several thousand years.

We see it today with modern franchises like “The Terminator” and “The Matrix”, and less recently with “2001: A Space Odyssey” and “Do Androids Dream of Electric Sheep/Blade Runner”. AI is so popular and accessible as a concept in mainstream media that if you ask people what they think it is, they think they’ll be able to answer.

But here’s the rub - it’ll sound something like “Killer Robots in the Future!”. Rarely do you hear anyone mention the effect of AI on the stock market or the Amazon website. I find this lack of awareness about AI frustrating and frightening because AI exists now, it has existed for decades, and it impacts almost every aspect of our daily lives.

The birth of AI as a research area happened in 1956 at Dartmouth College in New Hampshire as a small but well-funded programme that hoped to create a truly intelligent “thinking machine” within a generation. They failed of course, as creating intelligence isn’t particularly easy. But they laid the foundations. In the decades since, with others following, our understanding has accelerated and the number of practical uses for AI has grown. And like most technologies, the rate of improvement in AI can be plotted as an S curve.

Technologies tend to have a slow adoption rate early on as a result of the limited capabilities they offer. As the offer increases, so does the adoption rate. Unlike a simple exponential curve, the S curve also captures the decline in a technology’s growth as it approaches its maximum potential, or as market forces push funding into new technologies which will ultimately replace it.

A better way to picture the impact of a specific technology on our lives is as a game of chess. At the start of the game, the choices you make are small and of little significance. You can recover from a mistake. But by the time you reach the middle of the board every decision you make will have large significance, and each creates a win or lose situation. Games of chess also follow the S curve.

By the time a technology reaches the lower midpoint of the curve it starts to have a major visible impact, with the velocity of that impact suddenly starting to increase. Artificial intelligence is now at that point. AI is not just here, it’s a punk teenager and pissed off at its parents.

It’s more capable than you know.

You interact with AI and one of its children, Machine Learning, every time you use any web-connected device. You do it every time you search, shop online, fill out a form, send a Tweet, or upvote a comment. It even happens when you buy petrol, turn on the tap, drive your car to work, or buy any newspaper. Every aspect of your life is measured, stored, and used at some point by an algorithm.

Everything you do while living your life is kept as data so that machines can later parse it and use it to identify patterns. The effect is huge.

Roughly 70% of the world’s financial trading is controlled directly by 80 computers that use machine learning to improve their own performance. They can recognise an opportunity and carry out a purchase or sale within 3 milliseconds. The speed at which they operate means that humans are not only incapable of being part of the process, but have been designed out of the system completely to reduce error.

AI is rapidly getting to the point where it is better at diagnosing medical conditions than teams of doctors. Every patient report, every update to a patient’s condition, and every case history is available as digital data to be parsed, analysed and scored in real time to diagnose conditions that require a breadth of knowledge no single person has. In one case from Japan, AI was used to solve (in 10 minutes) a cancer diagnosis that oncologists had failed to detect (the human doctors had spent months trying).

Statistically, computers are better drivers than people are. In the 1.4 million miles Google’s fleet of self-driving cars have covered on public roads, “not one of the 17 small accidents they’ve been involved in was the fault of the AI”. There’s the Google car driving into a bus that happened recently, but deep analysis of the incident showed that the bus actually drove into the car. A study by Virginia Tech showed that Google’s autonomous systems are 2.5 times less likely to have a car crash than a human. Given some of the behaviour I’ve experienced on the roads, I think this is a pessimistic number. AI is also being used to fly planes, with pilots of the Boeing 777 on average spending “just seven minutes manually piloting their planes in a typical flight” (anonymous Boeing 777 pilot). The United States and British governments have had fully autonomous drones flying for well over a decade.

Computers are now writing articles, poems, and even screenplays. Netflix’s now famously complicated taxonomy may have been put in place by people, but it’s machines that use it to work out what the next hit TV show will be. Associated Press uses AI to deliver over 3,000 reports per year, while Forbes’ earnings reports are all machine generated. These aren’t lists of numbers - this is long-form copy. Many sports reports are now written using AI, and they are published the instant the game ends. Before the team has left the field, the articles are being read. A study by Christer Clerwall showed that when asked to tell the difference between machine- and human-written stories, people couldn’t. I mean, can you tell which parts of this blog were written by a machine?

Computers are better at designing their own components than people are. In the 1990s, Dr Adrian Thompson of Sussex University tested what would happen if evolutionary theory was put to use by machines in building an electrical circuit. The circuit’s task was simple: recognise a 1kHz tone and output 0 volts, or output 5 volts for a 10kHz tone. The algorithm iterated over 4,000 times before finding the best possible circuit. The circuit was tested, and it worked perfectly. The surprising thing, though, was that nobody could explain how the circuit worked, or manage to produce a better one. This experiment has been repeated many times, with more and more complexity introduced, and each time the machines make parts for themselves better than people can.
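The iterative process Thompson used is an evolutionary algorithm: score candidates, keep the fittest, recombine and mutate them, and repeat. This is not his actual hardware experiment, just a toy sketch of the loop on a bitstring whose “fitness” is how closely it matches a target pattern (the target, population size, and mutation rate are all my own illustrative choices):

```python
import random

random.seed(42)
TARGET = [1, 0] * 8  # toy "spec" the population must evolve to match

def fitness(genome):
    # Score = how many positions match the desired behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Evolve: select the fittest, recombine, mutate, repeat.
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(f"best fitness: {fitness(best)}/{len(TARGET)}")
```

The unsettling part of Thompson’s result wasn’t the loop itself, which is this simple, but that the evolved solution exploited physical quirks of the chip no human designer would have thought to use.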

Computers are creating art, helping to cure the sick, improving themselves, and taking care of complex or monotonous tasks. We let them drive us, shop for us, fly us, and treat us. We let them form opinions for us, and let them entertain us. Where do people fit in?

It will replace you (unless you redefine who you are)

A study in 2013 showed that 47% of jobs in the United States were at risk of being replaced by automated systems. And a lot has happened in the three years since.

While your interactions with AI can make your life easier and more pleasant, they are designed to achieve something more. Every time you do something that can be logged and compared, you are training the AI in human behaviour. We can’t stop living our lives, so how can we stop the machines taking over?

If you are coming to the Museum Computer Group conference on October the 19th, I’ll tell you!
