Nov 01 2018

Content migration is a topic with a lot of facets. We’ve already covered some important migration information in earlier posts on our blog.

So far, readers of this series will have gotten lots of good process information, and learned how to move a Drupal 6 or 7 site into Drupal 8. This post, though, will cover what you do when your content is in some other data framework. If you haven’t read through the previous installments, I highly recommend you do so. We’ll be building on some of those concepts here.

Content Type Translation

One of the first steps of a Drupal to Drupal migration is setting up the content types in the destination site. But what do you do if you are moving to Drupal from another system? Well, you will need to do a little extra analysis in your discovery phase, but it’s very doable.

Most content management systems have at least some structure that is similar to Drupal’s node types, as well as a tag/classification/category system that is analogous to Drupal’s taxonomy. And it’s almost certain to have some sort of user account. So, the first part of your job is to figure out how all that works.

Is there only one ‘content type’, which is differentiated by some sort of tag (“Blog Post”, “Product Page”, etc.)? Well, then, each of those might be a different content type in Drupal. Are Editors and Writers stored in two different database tables? Well, you probably just discovered two different user roles, and will be putting both user types into Drupal users, but with different roles. Does your source site allow comments? That maps pretty closely to Drupal comments, but make sure that you actually want to migrate them before putting in the work! Drupal 8 Content Migration: A Guide For Marketers, one of the early posts in this series, can help you make that decision.

Most CMS systems will also have a set of meta-data that is pretty similar to Drupal’s: created, changed, author, status and so on. You should give some thought to how you will map those fields across as well. Note that author is often a reference to users, so you’ll need to consider migration order as well.
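For example, a minimal (and hypothetical) sketch of that metadata mapping in a migration’s process section might look like the following. The source field names (post_created, post_changed, post_status, author_id) and the example_user migration are made up for illustration; migration_lookup is the core process plugin that swaps the old author ID for the new Drupal user ID, which is exactly why the user migration needs to run first:

process:
  created: post_created
  changed: post_changed
  status: post_status
  uid:
    plugin: migration_lookup
    migration: example_user
    source: author_id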

If your source data is not in a content management system (or you don’t have access to it), you may have to dig into the database directly. If you have received some or all of your content in the XML, CSV, or other text-type formats, you may just have to open the files and read them to see what you are working with.

In short, your job here will be to distill the non-Drupal conventions of your source site into a set of Drupal-compatible entity types, and then build them.

Migration from CSV

CSV stands for “Comma-Separated Values”, and it is a file format often used for transferring data in bulk. If you get some of your data from a client in a spreadsheet, it’s wise to export it to CSV. This format strips out all the MS Office or Google Sheets gobbledygook and just gives you a straight block of data.

Currently, migrations of CSV files into Drupal use the Migrate Source CSV module. However, this module is being moved into core and deprecated. Check the Bring migrate_source_csv to core issue to see what the status on that is, and adjust this information accordingly.

The Migrate Source CSV module has a great example and some good documentation, so I’ll just touch on the highlights here.

First, know that CSV isn’t super-well structured, so each entity type will need to be a separate file. If you have a spreadsheet with multiple tabs, you will need to export each separately, as well.

Second, connecting to it is somewhat different than connecting to a Drupal database. Let’s take a look at the data and source configuration from the default example linked above.

migrate_source_csv/tests/modules/migrate_source_csv_test/artifacts/people.csv




id,first_name,last_name,email,country,ip_address,date_of_birth
1,Justin,Dean,jdean0@example.com,Indonesia,60.242.130.40,01/05/1955
2,Joan,Jordan,jjordan1@example.com,Thailand,137.230.209.171,10/14/1958
3,William,Ray,wray2@example.com,Germany,4.75.251.71,08/13/1962


migrate_source_csv/tests/modules/migrate_source_csv_test/config/install/migrate_plus.migration.migrate_csv.yml (Abbreviated)




...
source:
  plugin: csv
  path: /artifacts/people.csv
  keys:
    - id
  header_row_count: 1
  column_names:
    -
      id: Identifier
    -
      first_name: 'First Name'
    -
      last_name: 'Last Name'
    -
      email: 'Email Address'
    -
      country: Country
    -
      ip_address: 'IP Address'
    -
      date_of_birth: 'Date of Birth'
...


Note first that this migration is using plugin: csv, instead of the d7_node or d7_taxonomy_term that we’ve seen previously. This plugin is in the Migrate Source CSV module, and handles reading the data from the CSV file.

  path: /artifacts/people.csv

The path config, as you can probably imagine, is the path to the file you’re migrating.  In this case, the file is contained within the module itself.




keys:
  - id


The keys config is an array of columns that are the unique id of the data.




header_row_count: 1
column_names:
  -
    id: Identifier
  -
    first_name: 'First Name'
  -
    last_name: 'Last Name'
...


These two configurations interact in an interesting way. If your data has a row of headers at the top, you will need to let Drupal know about it by setting a header_row_count. When you do that, Drupal will parse the header row into field ids, then move the file to the next line for actual data parsing.

However, if you set the column_names configuration, Drupal will override the field ids created when it parsed the header row. By passing only select field ids, you can skip fields entirely without having to edit the actual data. It also allows you to specify a human-readable field name for the column of data, which can be handy for your reference, or if you’re using Drupal Migrate’s admin interface.

You really should set at least one of these for each CSV migration.
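For instance, here is a hypothetical variation of the source configuration above that should expose only the first four columns, skipping country, ip_address, and date_of_birth without editing the file itself:

source:
  plugin: csv
  path: /artifacts/people.csv
  keys:
    - id
  header_row_count: 1
  column_names:
    -
      id: Identifier
    -
      first_name: 'First Name'
    -
      last_name: 'Last Name'
    -
      email: 'Email Address'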

The process configuration will treat these field ids exactly the same as a Drupal fieldname.

Process and Destination configuration for CSV files are pretty much the same as with a Drupal-to-Drupal import, and they are run with Drush exactly the same.
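As a rough sketch of what that could look like for the people.csv example above, assuming a hypothetical person content type with field_email and field_country fields, the process and destination sections might use the core concat and default_value plugins like this:

process:
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
  field_email: email
  field_country: country
  type:
    plugin: default_value
    default_value: person
destination:
  plugin: 'entity:node'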

Migration from XML/RSS

XML is a common data storage format that presents data in a tagged, hierarchical structure. Many content management systems and databases have an ‘export as XML’ option. One advantage XML has over CSV is that you can put multiple data types into a single file. Of course, if you have lots of data, this advantage could turn into a disadvantage as the file size balloons! Weigh your choice carefully.

The Migrate Plus module has a data parser for XML, so if you’ve been following along with our series so far, you should already have this capability installed.

Much like CSV, you will have to connect to a file, rather than a database. RSS is a commonly used XML format, so we’ll walk through connecting to an RSS feed for our example. I pulled some data from Phase2’s own blog RSS for our use, too.

https://www.phase2technology.com/ideas/rss.xml (Abbreviated)




<?xml version="1.0" encoding="utf-8"?>
<rss ... xml:base="https://www.phase2technology.com/ideas/rss.xml">
  <channel>
    <title>Phase2 Ideas</title>
    <link>https://www.phase2technology.com/ideas/rss.xml</link>
    <description/>
    <language>en</language>
    <item>
      <title>The Top 5 Myths of Content Migration *plus one bonus fairytale</title>
      <link>https://www.phase2technology.com/blog/top-5-myths-content</link>
      <description>The Top 5 Myths of Content Migration ... </description>
      <pubDate>Wed, 08 Aug 2018 14:23:34 +0000</pubDate>
      <dc:creator>Bonnie Strong</dc:creator>
      <guid isPermaLink="false">1304 at https://www.phase2technology.com</guid>
    </item>
  </channel>
</rss>


example_xml_migrate/config/install/migrate_plus.migration.example_xml_articles.yml




id: example_xml_articles
label: 'Import articles'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://www.phase2technology.com/ideas/rss.xml'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    -
      name: guid
      label: GUID
      selector: guid
    -
      name: title
      label: Title
      selector: title
    -
      name: pub_date
      label: 'Publication date'
      selector: pubDate
    -
      name: link
      label: 'Origin link'
      selector: link
    -
      name: summary
      label: Summary
      selector: description
  ids:
    guid:
      type: string
destination:
  plugin: 'entity:node'
process:
  title:
    plugin: get
    source: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: article


The key bits here are in the source configuration.




source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://www.phase2technology.com/ideas/rss.xml'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item


Much like CSV’s use of the csv plugin to read a file, XML is not using the d7_node or d7_taxonomy_term plugin to read the data. Instead, it’s pulling in a url and reading the data it finds there. The data_fetcher_plugin takes one of two different possible values, either http or file. HTTP is for a remote source, like an RSS feed, while File is for a local file. The urls config should be pretty obvious.

The data_parser_plugin specifies which PHP library to use to read and interpret the data. Possible parsers here include JSON, SOAP, XML, and SimpleXML. SimpleXML’s a great library, so we’re using that here.

Finally, item_selector defines where in the XML the items we’re importing can be found. If you look at our data example above, you’ll see that the actual nodes are in rss -> channel -> item. Each node would be an item.




fields:
...
  -
    name: pub_date
    label: 'Publication date'
    selector: pubDate
...


Here you see one of the fields from the xml. The label is just a human-readable label for the field, while the selector is the field within the XML item we’re getting.

The name is what we’ll call a pseudo-field. A pseudo-field acts as temporary storage for data. When we get to the Process section, the pseudo-fields are treated essentially as though they were fields in a database.

We’ve seen pseudo-fields before, when we were migrating taxonomy fields in Drupal 8 Migrations: Taxonomy and Nodes. We will see why they are important here in a minute, but there’s one more important thing in source.




ids:
  guid:
    type: string


This snippet sets the guid as the unique identifier of the articles we’re importing. This guarantees uniqueness and is very important to specify.

Finally, we get to the process section.




process:
...
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
...


So, here is where we’re using the pseudo-field we set up before. This takes the value from pubDate that we stored in the pseudo-field pub_date, does some formatting to it, and assigns it to the created field in Drupal. The rest of the fields are done in a similar fashion.

Destination is set up exactly like a Drupal-to-Drupal migration, and the whole thing is run with Drush the exact same way. Since RSS is a feed of real-time content, it would be easy to set up a cron job to run that drush command, add the --update flag, and have this migration go from one-time content import to being a regular update job that kept your site in sync with the source.

Migration from WordPress

A common migration path is from WordPress to Drupal. Phase2 recently did so with our own site, and we have done it for clients as well. There are several ways to go about it, but our own migration used the WordPress Migrate module.

In your WordPress site, under Tools >> Export, you will find a tool to dump your site data into a customized xml format. You can also use the wp-cli tool to do it from the command line, if you like.

Once you have this file, it becomes your source for all the migrations. Here’s some good news: it’s an XML file, so working with it is very similar to working with RSS. The main difference is in how we specify our source connections.

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_authors.yml




langcode: en
status: true
dependencies:
  enforced:
    module:
      - phase2_migrate
id: example_wordpress_authors
class: null
field_plugin_method: null
cck_plugin_method: null
migration_tags:
  - example_wordpress
  - users
migration_group: example_wordpress_group
label: 'Import authors (users) from WordPress WXR file.'
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'
  namespaces:
    wp: 'http://wordpress.org/export/1.2/'
    excerpt: 'http://wordpress.org/export/1.2/excerpt/'
    content: 'http://purl.org/rss/1.0/modules/content/'
    wfw: 'http://wellformedweb.org/CommentAPI/'
    dc: 'http://purl.org/dc/elements/1.1/'
  urls:
    - 'private://example_output.wordpress.2018-01-31.000.xml'
  fields:
    -
      name: author_login
      label: 'WordPress username'
      selector: 'wp:author_login'
    -
      name: author_email
      label: 'WordPress email address'
      selector: 'wp:author_email'
    -
      name: author_display_name
      label: 'WordPress display name (defaults to username)'
      selector: 'wp:author_display_name'
    -
      name: author_first_name
      label: 'WordPress author first name'
      selector: 'wp:author_first_name'
    -
      name: author_last_name
      label: 'WordPress author last name'
      selector: 'wp:author_last_name'
  ids:
    author_login:
      type: string
process:
  name:
    plugin: get
    source: author_login
  mail:
    plugin: get
    source: author_email
  field_display_name:
    plugin: get
    source: author_display_name
  field_first_name:
    plugin: get
    source: author_first_name
  field_last_name:
    plugin: get
    source: author_last_name
  status:
    plugin: default_value
    default_value: 0
destination:
  plugin: 'entity:user'
migration_dependencies: null


If you’ve been following along in our series, a lot of this should look familiar.




source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'


This section works exactly like the XML RSS example above. Instead of using http, we are using file for the data_fetcher_plugin, so it looks for a local file instead of making an HTTP request. Additionally, due to the difference in structure between an RSS feed and a WordPress WXR file, the item_selector is different, but it works the same way.




namespaces:
  wp: 'http://wordpress.org/export/1.2/'
  excerpt: 'http://wordpress.org/export/1.2/excerpt/'
  content: 'http://purl.org/rss/1.0/modules/content/'
  wfw: 'http://wellformedweb.org/CommentAPI/'
  dc: 'http://purl.org/dc/elements/1.1/'


These namespace designations allow Drupal’s XML parser to understand the particular brand and format of the WordPress export.




urls:
  - 'private://example_output.wordpress.2018-01-31.000.xml'


Finally, this is the path to your export file. Note that it is in the private filespace for Drupal, so you will need to have private file management configured in your Drupal site before you can use it.




fields:
  -
    name: author_login
    label: 'WordPress username'
    selector: 'wp:author_login'


We’re also setting up pseudo-fields again, storing the value from wp:author_login in author_login.

Finally, we get to the process section.




process:
  name:
    plugin: get
    source: author_login


So, here is where we’re using the pseudo-field we set up before. This takes the value from wp:author_login that we stored in author_login and assigns it to the name field in Drupal.

Configuration for the migration of the rest of the entities - categories, tags, posts, and pages - look pretty much the same. The main difference is that the source will change slightly:

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_category.yml  (abbreviated)




source:
...
  item_selector: '/rss/channel/wp:category'


example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_tag.yml (abbreviated)




source:
...
  item_selector: '/rss/channel/wp:tag'


example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_post.yml (abbreviated)




source:
...
  item_selector: '/rss/channel/item[wp:post_type="post"]'


And, just like our previous two examples, WordPress migrations can be run with Drush.

A cautionary tale

As we noted in Managing Your Drupal 8 Migration, it’s possible to write custom Process Plugins. Depending on your data structure, it may be necessary to write a couple to handle values in these fields. On the migration of Phase2’s site recently, after doing a baseline test migration of our content, we discovered a ton of malformed links and media entities. So, we wrote a process plugin that did a bunch of preg_replace to clean up links, file paths, and code formatting in our body content. This was chained with the default get plugin like so:




process:
  body/value:
    -
      plugin: get
      source: content
    -
      plugin: p2body


The plugin itself is a pretty custom bit of work, so I’m not including it here. However, a post on custom plugins for migration is in the works, so stay tuned.

Useful Resources and References

If you’ve enjoyed this series so far, we think you might enjoy a live version, too! Please drop by our session proposal for Drupalcon Seattle, Moving Out, Moving In! Migrating Content to Drupal 8 and leave some positive comments.

Oct 29 2018

Yesterday, big tech tripped over itself with IBM’s acquisition of Red Hat for the staggering sum of $34B. Many were shocked by the news, but those who know Red Hat well may have been less surprised. Long the leader and the largest open source company in the world, Red Hat has been getting it right for many years.

Still more striking is how this fits a new pattern for 2018 and beyond, one that is completely different from the typical enterprise software acquisition of the past. Red Hat is not the first mega tech deal of the year for the open source community. (There was the $7.5B purchase of GitHub by Microsoft, and more recently the $5.2B merger of big-data rivals Cloudera and Hortonworks.)

Now, this much larger move by IBM brings us to consider the importance of open source value and of contribution culture at large.

This was a great acquisition target for IBM:

  • Red Hat has a powerful product suite for some of the more cutting-edge aspects of web development, including a secure and fully managed version of Linux, hybrid cloud, containerization technology, and a large and satisfied customer base;

  • their products and technologies fit perfectly against IBM’s target market of enterprise digital transformation; and

  • the deal opens up a huge market to Red Hat via Big Blue.

And in the age we live in, one focused on (and fearful of) security, privacy, data domiciles, and crypto tech, a premium of roughly $14B over market cap (about $74 per share) is a validation of the open source model: shining sunlight on software to achieve more secure products.

At Phase2, this news comes with much interest. Red Hat is a company that we know very well for its contributions to open source and web technology, in general. We have worked with Red Hat since 2013 and come to respect them in several key ways.

As pioneers in the commercialization of open source, Red Hat popularized and legitimized the idea that the concept of open contribution and financial gain can co-exist. While our own experimentations with productization of open source over the years within the Drupal community were certainly less publicized, we, and ostensibly the ‘industry’, looked to Red Hat as the archetype for a modern business model that could work.

We’ve had the privilege of working for, and alongside, the Red Hat team to develop many of the company’s websites over the last five years, including Redhat.com and developers.redhat.com. Through these experiences, we have come to value the way in which they blend great talent, great culture, and open values.

On many occasions, we have even drawn parallels between their business culture and our own. After reading The Open Organization by Red Hat CEO Jim Whitehurst, I was struck by the values and culture of Red Hat and their similarities with how Phase2 eyes the future. Perhaps it was their open source ethos, collaborative approach, or the meritocracy (vs. democracy or autocracy) they fostered, but I felt like we were emulating a “big brother”.

Finally, and perhaps most importantly, we respect them as a business. The pure fact that a larger-than-life brand like IBM would pay such a premium implies both strategic and business health. I believe that, while in part it is earned from a strong, repeatable, subscription-based revenue stream, nothing creates business value like a great culture of amazing people, dependable customers, and undeniable innovation.

And now with IBM’s extended reach and additional resources, we look forward to Red Hat’s continued success and partnership.

Oct 01 2018

One of the most exciting additions to Drupal 8.6 is the new experimental Layout Builder. Many are focused on Layout Builder replacing Panels, Panelizer, Display Suite, and even Paragraphs. The clean and modular architecture of Layout Builder supports a multitude of different use cases. It can even be used to create a WYSIWYG Mega Menu experience.

Note: Experimental

While Layout Builder was first added as experimental to Drupal 8.5, it has changed significantly since and is now considered more "beta" than "alpha". While still technically experimental and not officially recommended for production sites, the functionality and architecture has stabilized with Drupal 8.6 and it's time to start evaluating it more seriously.

What is a Mega Menu?

For the purposes of this discussion, I'll define a "Mega Menu" as simply a navigation structure where each item in the menu can expand to show a variety of different components beyond a simple list of links.

In the example above, we see a three-column menu item with two submenus, a search form, and a piece of static content (or a reference to another node).

Mega Menus present many challenges for a site including accessibility, mobile responsiveness, governance and revision moderation, etc. While I don't advocate the use of mega menus, sometimes they are an unavoidable requirement.

Past Solutions

I've seen many different implementations of Mega Menus over the years.

  • Modules such as we_megamenu (D8),  tb_megamenu (D7), etc.
  • Custom blocks (D8),
  • Hard-coded links, node references, and Form API rendered in theme,
  • MiniPanels rendered in the theme (D7)
  • Integrations with Javascript libraries such as Superfish
  • Custom site-specific code

These solutions had many problems and often didn't provide any easy way for site owners to make changes. Often these solutions caused headaches when migrating the site or supporting it over a long life cycle. I've known many teams who simply groan when a client mentions "we want mega menus."

Wouldn't it be nice if there was a consistent way in Drupal 8 to create and manage these menus with a component-based design architecture?

Layout Builder

The Layout Builder module can take control over the rendering of an entity view mode. Normally in Drupal, a view mode is just a list of fields you want to display, and in which order. These simplistic lists of fields are usually passed to a theme template responsible for taking the raw field data and rendering it into the designed page.

With Layout Builder, a view mode consists of multiple "sections" that can contain multiple "blocks." A "Section" references a specific "Layout" (2 column, 3 column, etc). Each field of the entity can be displayed via a new field_block. Thus, a traditional view mode is just a single section with a one-column layout filled with a block for each field to be displayed.

The core Layout Discovery module is used to locate the available "layouts" on the site that can be assigned to a Section. Core comes with one column, two column, and three column (33/33/33 and 25/50/25) layouts. Custom layout modules can be easily created to wrap a twig template for any layout needed within a section.
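As a hypothetical sketch, a small custom module (here called example_layouts) could expose an extra layout to Layout Builder with nothing more than an example_layouts.layouts.yml file and a matching Twig template:

example_two_column_70_30:
  label: 'Two column (70/30)'
  category: 'Example Layouts'
  # The Twig template lives at layouts/example-two-column-70-30.html.twig in the module.
  path: layouts
  template: example-two-column-70-30
  regions:
    main:
      label: Main content
    sidebar:
      label: Sidebar

The template then simply prints {{ content.main }} and {{ content.sidebar }} wherever the design calls for them.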

Blocks for each field can be added to a section, along with any other predefined or custom block on the site. Core also provides "inline blocks" that are instances of custom blocks referenced by the node but not shown in the global block layout admin view.

When an inline block is edited, a new revision of the block is created and a new revision of the node entity is created to reference it, allowing layout changes to be handled with the same workflow as normal content changes.

Section Storage

Layout Builder uses a Section Storage plugin to determine how the list of block uuids referenced in a layout section are stored. The default layout for a content type is stored in the third_party_settings for the specific view mode configuration. If node-specific overrides are enabled for the bundle, the overriding list of blocks in the section are stored within a layout_builder__layout field added to the node.

While the use of Layout Builder is focused on Nodes (such as Landing Pages), the Layout Builder architecture actually works with any entity type that supports the Section Storage. Specifically, any entity that is "fieldable" is supported.

Fieldable Menu Items

If Layout Builder works with any fieldable entity, how can we make a Menu Item entity fieldable? The answer is the menu_item_extras contrib module. This module allows you to add fields to a menu entity along with form and display view modes. For example, you can add an "icon" image field that will be displayed next to the menu link.

The Menu Item Extras module has been used in Drupal 8 for a while to implement mega menus via additional fields. However, in Drupal 8.6 you don't need to add your own fields, you just need to enable Layout Builder for the default menu item view display mode:

When you allow each menu link to have its layout customized, a layout_builder__layout field is added to the menu item to store the list of blocks in the sections. When you Add a Link to your menu, a new tab will appear for customizing the layout of the new menu link item:

The Layout tab will show the same Layout Builder UI used to create node landing pages, except now you are selecting the blocks to be shown on the specific menu item. You can select "Add Section" to add a new layout, then "Add Block" to add blocks to that section.

In the example above I have used the optional Menu Blocks module to add submenus of the Drupal Admin menu (Structure and Configuration) to the first two columns (default core menu blocks do not allow the parent to be selected, but the Menu Block contrib module adds that). In third column the Search Form block was added, and below that is an "Inline Block" using the core Basic Block type to add static text to the menu item.

Theming the Menu

The Menu Item Extras module provides twig templates for handling the display of the menu item. Each menu item has a "content" variable that contains the field data of the view mode, just like with any node view mode.

Each theme will need to decide how best to render these menus. Using a subtheme of the Bootstrap theme I created the following menu-levels.html.twig template to render the example shown at the beginning of this article:

<ul{{ attributes.addClass(['menu', 'menu--' ~ menu_name|clean_class, 'nav', 'navbar-nav']) }}>
 {% for item in items %}
   {% set item_classes = [
     item.is_expanded ? 'expanded',
     item.is_expanded and menu_level == 0 ? 'dropdown',
     item.in_active_trail ? 'active',
     ]
   %}
   <li{{ item.attributes.addClass(item_classes) }}>
     <a href="https://www.phase2technology.com/blog/creating-mega-menu-layout-builder/{{ item.url }}" class="dropdown-toggle" data-toggle="dropdown">{{ item.title }} <span class="caret"></span></a>
     <div class="dropdown-menu dropdown-fullwidth">
       {{ item.content }}
     </div>
   </li>
 {% endfor %}
</ul>

Summary

The combination of Layout Builder and Menu Item Extras provides a nearly WYSIWYG experience for site owners to create complex mega menus from existing block components. While this method still requires a contrib module, the concept of making a menu item entity fieldable is a clean approach that could easily find its way into core someday. Rather than creating yet another architecture and data model for another "mega menu module", this approach simply relies on the same entity, field, and view mode architecture used throughout Drupal 8.

While Layout Builder is still technically "experimental", it is already very functional. I expect to see many sites start to use it in the coming months and other contrib modules to enhance the experience (such as Layout Builder Restrictions) once more developers embrace this exciting new functionality in Drupal core.

My thanks to the entire team of developers who have worked on the Layout Initiative to make Layout Builder a reality and look forward to it being officially stable in the near future.

Jul 17 2018

If you're not familiar with GatsbyJS, then you owe it to yourself to check it out. It's an up and coming static site generator with React and GraphQL baked in, and it prides itself on being really easy to integrate with common CMS'es like Drupal.

In other words, Gatsby lets you use Drupal as the backend for a completely static site. This means you get a modern frontend stack (React, GraphQL, Webpack, hot reloading, etc.) and a fully static site (with all of the performance and security benefits that come along with static sites) while still keeping the power of Drupal on the backend. 

Let's give it a shot! In this post, we'll see just how simple it is to use Drupal 8 as the backend for a Gatsby-powered static site. 

Step 1: Set up Drupal

This step is super easy. You basically just have to install and configure the JSON API module for Drupal 8, and you're done. 

First off (assuming you already have a Drupal 8 site running), we'll just download and install the JSON API module.

composer require drupal/jsonapi
drupal module:install jsonapi

Now we just have to make sure we grant anonymous users read permission on the API. To do this, go to the permissions page and check the "Anonymous users" checkbox next to the "Access JSON API resource list" permission. If you skip this step, you'll be scratching your head about the endless stream of 406 error codes.

After this you should be all set. Try visiting http://YOURSITE.com/jsonapi and you should see a list of links. For example, if you have an "Article" content type, you should see a link to http://YOURSITE.com/jsonapi/node/article, and clicking that link will show you a JSON list of all of your Article nodes.

Working? Good. Let's keep moving.

Step 2: Install GatsbyJS

Now we need to work on Gatsby. If you don't have it installed already, run this to grab it:

npm install --global gatsby-cli

That'll give you the "gatsby" cli tool, which you can then use to create a new project, like so:

gatsby new YOURSITENAME

That command basically just clones the default Gatsby starter repo, and then installs its dependencies inside it. Note that you can include another parameter on that command which tells Gatsby that you want to use one of the starter repos, but to keep things simple we'll stick with the default.

Once complete, you have the basis for a working Gatsby site. But that's not good enough for us! We need to tell Gatsby about Drupal first.

Step 3: Tell Gatsby about Drupal

For this part, we'll be using the gatsby-source-drupal plugin for Gatsby. First, we need to install it:

cd YOURSITENAME
npm install --save gatsby-source-drupal

Once that's done, we just need to add a tiny bit of configuration for it, so that Gatsby knows the URL of our Drupal site. To do this, edit the gatsby-config.js file and add this little snippet to the "plugins" section:

plugins: [
  {
    resolve: `gatsby-source-drupal`,
    options: {
      baseUrl: `http://YOURSITE.COM`,
    },
  },
]

You're all set. That's all the setup that's needed, and now we're ready to run Gatsby and have it consume Drupal data.

Step 4: Run Gatsby

Let's kick the tires! Run this to get Gatsby running:

gatsby develop

If all goes well, you should see some output like this:

You can now view gatsby-starter-default in the browser.

  http://localhost:8000/

View GraphiQL, an in-browser IDE, to explore your site's data and schema

  http://localhost:8000/___graphql

Note that the development build is not optimized.
To create a production build, use gatsby build

(If you see an error message instead, there's a good chance your Drupal site isn't set up correctly and is erroring. Try manually running "curl yoursite.com/jsonapi" in that case to see if Drupal is throwing an error when Gatsby tries to query it.)

You can load http://localhost:8000/ but you won't see anything particularly interesting yet. It'll just be a default Gatsby starter page. It's more interesting to visit the GraphQL browser and start querying Drupal data, so let's do that.

Step 5: Fetching data from Drupal with GraphQL

Load up http://localhost:8000/___graphql in a browser and you should see a GraphQL UI called GraphiQL (pronounced "graphical") with cool stuff like autocomplete of field names and a schema explorer. 

Clear out everything on the left side, and type an opening curly bracket. It should auto-insert the closing one for you. Then you can hit ctrl+space to see the autocomplete, which should give you a list of all of the possible Drupal entity types and bundles that you can query. It should look something like this:

Entity autocomplete

For example, if you want to query Event nodes, you'll enter "allNodeEvent" there, and drill down into that object.

Here's an example which grabs the "title" of the Event nodes on your Drupal site:

{
  allNodeEvent {
    edges {
      node {
        title
      }
    }
  }
}

Note that "edges" and "node" are concepts from Relay, the GraphQL library that Gatsby uses under the hood. If you think of your data like a graph of dots with connections between them, then the dots in the graph are called “nodes” and the lines connecting them are called “edges.” You don't need to worry about this at the moment. For now, just get used to typing it.

Once you have that snippet written, you can click the play icon button at the top to run it, and you should see a result like this on the right side:

{
  "data": {
    "allNodeEvent": {
      "edges": [
        {
          "node": {
            "title": "Test node 1"
          }
        },
        {
          "node": {
            "title": "Test node 2"
          }
        },
        {
          "node": {
            "title": "Test node 3"
          }
        }
      ]
    }
  }
}

Note that this same pattern can give you pretty much any data you want from Drupal, including entity reference field data or media image URIs, etc. As a random example, here's a snippet from the Contenta CMS + GatsbyJS demo site:

{
  allNodeRecipe {
    edges {
      node {
        title
        preparationTime
        difficulty
        totalTime
        ingredients
        instructions
        relationships {
          category {
            name
          }
          image {
            relationships {
              imageFile {
                localFile {
                  childImageSharp {
                    fluid(maxWidth: 470, maxHeight: 353) {
                      ...GatsbyImageSharpFluid
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Pretty cool right? Everything you need from Drupal, in one GraphQL query.

So now we have Gatsby and Drupal all set up and we know how to grab data from Drupal, but we haven't actually changed anything on the Gatsby site yet. Let's fix that.

Step 6: Displaying Drupal data on the Gatsby site

The cool thing about Gatsby is that GraphQL is so baked in that it assumes that you'll be writing GraphQL queries directly into the components/templates.

In your codebase, check out src/pages/index.js and you should see some placeholder content. Delete everything in there, and replace it with this:

import React from 'react'

class IndexPage extends React.Component {

  render() {
    const pages = this.props.data.allNodePage.edges
    const pageTitles = pages.map(page => <li>{page.node.title}</li>)
    return <ul>{pageTitles}</ul>
  }
}

export default IndexPage

export const query = graphql`
  query pageQuery {
    allNodePage {
      edges {
        node {
          title
        }
      }
    }
  }
`

(Note, this assumes you have a node type named "Page"). 

All we're doing here is grabbing the node titles via the GraphQL query at the bottom, and then displaying them in a bulleted list. 

Here's how that looks on the frontend:

Gatsby showing node titles

And that's it! We are displaying Drupal data on our Gatsby site!

Step 7: Moving on

From here, you'll probably want to look at more complex stuff like creating individual pages in Gatsby for each Drupal node, or displaying more complicated data, or resizing Drupal images, etc. Go to it! The static site is your oyster! 

When you're happy with it, just run "gatsby build" and it'll export an actual static site that you can deploy anywhere you want, like Github Pages or Amazon S3 or Netlify.

Have fun!

Jul 12 2018

One of the most fundamental tasks of back-end Drupal 8 development is learning how to capture and utilize data. Unfortunately, as a new developer, trying to do so feels like wandering into an endless labyrinth of arrays, methods, objects, and arcane wizardry.

Say you want to get the summary off of a body field so you try something like $node->values['field_article_body'][0]['summary'], but  that doesn’t work. So you remember that you probably need to use a get method and you remember seeing something like $node->getValue('field_article_body') before, but that doesn’t work either.

Suddenly you find yourself lost in the labyrinth and desperately hoping for one of your guesses to be correct only to eventually get eaten by a minotaur (read: get frustrated and give up). Now, If you remember your Greek mythology, the way Theseus was able to triumph over the labyrinth where others had failed was by tying some twine to himself so that he could retrace his steps. The point of this blog post is to give you that twine so that next time you don’t have to guess your way to the data.

Remember your training

First, remember that D8 is based on object oriented programming (OOP) and think about what that really means. While it is indeed intricate, at its core it’s really just a series of classes and objects that extend off of each other. Plugins, entities, blocks, services, etc. might sound like complex concepts, but at the end of the day these are all just different kinds of classes with different uses and rulesets.

For a long time I was conscious of D8’s OOP nature, but I really only thought of it in terms of building new things from that foundation. I never thought about this crucial principle when I was trying to pull data out of the system, and pulling in my OOP knowledge was the first step in solving this problem.

Down into the labyrinth

Let’s take a simple example. Say you have the following node loaded and you want to use the title of the node.

(note: these screenshots are from xdebug, but you can get the same information by printing the variables to the page using var_dump() or kint())

After digging around in the node you find the title here:

But as we’ve already established, something like $node->values['title'] won’t work. This is because $node is  not simply an array, it’s a full object. Near the top you’ll notice that Xdebug is telling us exactly what class is creating the object below, Drupal\node\Entity\Node. If you go to that file you will see the following method on that class that will get you the data that you need:

public function getTitle() {
  return $this->get('title')->value;
}

Meaning, you can just run $node->getTitle() to get that node’s title. Notice the host of other useful functions there as well: getCreatedTime(), getOwner(), postSave(). All of these methods and more are available and documented for when you want to manipulate that node.

These aren’t the only methods you have available to you. In fact, if you look at the actual code in the getTitle() function you’ll see that it’s using a get method that’s nowhere to be found in this class. The rules of OOP suggest that if the method is useable but not in the class itself it’s probably being extended from another class. In fact, the class declaration for node extends EditorialContentEntityBase, which might not have anything useful on its own but it does extend ContentEntityBase which holds a plethora of useful methods, including the aforementioned get function!

public function get($field_name) {
  if (!isset($this->fields[$field_name][$this->activeLangcode])) {
    return $this->getTranslatedField($field_name, $this->activeLangcode);
  }
  return $this->fields[$field_name][$this->activeLangcode];
}

Notice how this get method seems to be designed for getting field values; we could probably get closer to the summary value I mentioned earlier by going $node->get('field_article_body'). If you run that method you get a different object entirely, a FieldItemList.

Once again, we can dig through the provided classes and available methods. FieldItemList extends ItemList which has a getValue() method which gets us even closer.

Now, instead of an object, we’re returning a simple array, which means we can use regular array notation to finally get that summary value: $node->get('field_article_body')->getValue()[0]['summary'].

So what did we actually do?

Pay special attention to the structure of this final call we’re using. Parsing it out like so demonstrates that it’s no mere guess-and-check, rather it’s a very logical sequence of events.

/** @var \Drupal\Core\Field\FieldItemList $field */
$field = $node->get('field_article_body'); // Method call on the node object, returns a FieldItemList object.

/** @var array $field_values */
$field_values = $field->getValue(); // Method call on the FieldItemList object, returns a plain array.

/** @var string $summary */
$summary = $field_values[0]['summary']; // Array notation to get the value from an array.

This also makes it obvious why our previous attempt of $node->values['title'] can’t work. It’s trying to get a values property off the node object, when such a thing doesn’t exist in the node class declaration.

Rule-breaking magic!

That being said, another perfectly valid way to get the summary field is $node->field_article_body->summary. Now, at first glance, this appears to contradict what I just said. The $node object obviously doesn’t have a field_article_body property in its class declaration. The reason this works is that it is successfully calling a magic method. PHP has a number of magic methods that can always be called on an object; these methods are easy to find because they start with a double underscore (__set(), __get(), __construct(), etc.). In this case, since we’re attempting to call a property that does not exist on the class, Drupal knows to use the __get() magic method to look at the properties on this instance of the object, and this instance is a node with a field named field_article_body and, by definition, a property of the same name. If you look further down the ContentEntityBase class you’ll see this very &__get() method.

PHPstorm Shortcuts

It’s worth noting that if you’re writing your code in an IDE like PHPstorm, this whole process gets a lot easier because the autocomplete will show you all the methods that are available to you regardless of which class they come from. That being said, being able to manually drill down through the classes is still useful for when you need a clearer idea of what the methods are actually doing.

Another amazing PHPstorm tool is being able to jump directly to the declaration of a method to see where it’s coming from. For instance in the aforementioned getTitle() method, you can right click the contained get() method and select Go To > Declaration to jump directly to the section of ContentEntityBase where that get() method is first declared.

Whats Next?

Don’t worry if this doesn’t make perfect sense your first time through, this is a complex system. It might help to think of this as simply reverse Object Oriented Programming. Rather than using OOP to build something new, you’re reverse-engineering it to grab something that already exists. I recommend messing around with this technique in your own Drupal environment. Try it with different object types, try it in different contexts (for instance, in an alter hook versus in a plugin). The more practice you get, the more comfortable you’ll feel with this essential technique.

Thanks for reading Part 1 of my Backend Drupal 8 series. Check back soon for Part 2, where I’m going to try to convince you to start using Xdebug.

Apr 16 2018

There’s no doubt that the digital landscape looks very different these days. When we talk about an organization's digital presence we are talking about a whole lot more than websites or content management systems.  

At Drupalcon Nashville, we got down to business with our Drupal community, partners and clients to discuss where Drupal fits into this new digital ecosystem, customer experience trends, Drupal 8 best practices, and how to maintain a competitive digital experience platform in this fast-moving, ever-changing market.

The Customer Experience Landscape

61% think that chatbots allow for faster resolution for customer service answers.

Source: Aspect Software Study

Almost 3⁄4 of regular voice technology users believe brands should have unique voices and personalities for their apps.

Source: SONARTM, J. Walter Thompson's proprietary in-house research tool

In short, audiences want to engage with brands on the channels they are already using. These customer experience (CX) expectations are driving channel explosion. With the proliferation of channels and new digital touchpoints, organizations are forced to undergo digital transformation to stay relevant and competitive. Our own Jeff Walpole addressed these market trends in his Drupalcon session: Beyond Websites: The Digital Experience Platform

Massive Discrepancy Between Brands and Consumers

A recent eConsultancy report indicated that 81% of consumer brands believe they have a holistic view of their customers. Conversely, only 37% of consumers feel that they are actually understood by their favorite brands.

It’s clear that understanding the customer and their experience with your brand is essential to developing a competitive presence.  This year, our Drupalcon booth theme addressed this directly with the rallying cry: “Create the experience. Deliver the results.” We asked Drupalcon attendees to engage with our interactive data visualization board to crowd-source the community’s thoughts on the impact of customer experience.

Phase2, Director of Marketing discussing customer experience at DrupalCon Booth

As we explained in our booth, leveraging engagement data is essential to successful customer experience. We took the booth experience further by creating a digital experience that seamlessly flowed from the in-person experience to our attendees' phones using Acquia Journey, a customer journey orchestration tool allowing us to serve up a personalized experience for each attendee.


The Need for Drupal to Evolve

Just as brands need to evolve to meet the demands of their customers, Drupal needs to evolve to engage the right audience and compete with digital experience platforms like Adobe. We were thrilled to participate in the Drupal Association’s marketing fundraiser to raise funds to support more marketing material for Drupal.

screenshot of Drupal marketing fundraising efforts

 

As we grow and transform with the market, culture becomes more important than ever. Our own culture expert, Nicole Lind, was a featured speaker this year, discussing Why Building Awesome Culture is Essential, Not Just a Nice-to-Have.

Drupalcon is always an inspirational and energizing event, with a week full of great sessions, critical discussions, and perhaps too much hot chicken. We look forward to our continuing work with the community, building impactful digital experience platforms with Drupal.
 

Feb 13 2018

In this post, we’ll begin to talk about the development considerations of actual website code migration and other technological details. In these exercises, we’re assuming that you’re moving from Drupal 6 or 7 to Drupal 8. In a later post, I will examine ways to move other source formats into Drupal 8 - including CSV files, non-Drupal content management systems, or database dumps from weird or proprietary frameworks.

Migration: A Primer

Before we get too deep into the actual tech here, we should probably take a minute to define some terms and explain what’s actually happening under the hood when we run a migration, or the rest of this won’t make much sense.

When we run a migration, what happens is that the Web Server loads the content from the old site, converts it to a Drupal 8 format, and saves it in the new site.  Sounds simple, right?

Actually, it pretty much is that simple. At least, conceptually. So, try to keep those three steps in mind as we go through the hard stuff later. Everything we do is designed to make one of those three steps work.

Key Phrases

  • Migration: The process of moving content from one site to another. ‘A migration’ typically refers to all the content of a single content or entity type (in other words, one node type, one taxonomy, and so on).

  • Migration Group: A collection of Migrations with common traits

  • Source: The Drupal 6 or 7 database from which you’re drawing your content (or other weird source of data, if applicable)

  • Process: The stuff that Drupal code does to the data after it’s been loaded, in order to digest it into a format that Drupal 8 can work with

  • Destination: The Drupal 8 site

Interestingly, each of those key phrases above corresponds directly to a code file that’s required for migration. Each Migration has a configuration (.yml) file, and each is individually tailored for the content of that entity. As config files, each of these is pretty independent and not reusable. However, we can also assign them to Migration Groups. Groups are also configuration (.yml) files. They allow us to declare common configurations once, and reuse them in each migration that belongs to that group.

The Source Plugin code is responsible for doing queries to the Source database, retrieving the data, and formatting it into PHP objects that can be worked on. The Process Plugin takes that data, does stuff to it, and passes it to the next step. The Destination Plugin then saves it in Drupal 8 format.  Rinse, repeat.

On a Drupal-to-Drupal migration, around 75% of your time will be spent working in the Migration or Migration Group config, declaring the different Process Plugins to use. You may wind up writing one or more Process Plugins as part of your migration development, but a lot of really useful ones are included in Drupal core migration code and are documented here. A few more are included with Migrate Plus.
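For example, a hypothetical mapping that chains two of those core plugins (skip_on_empty and migration_lookup) could look like this; the field names and the example_tags migration are made up for illustration:

process:
  # A straight copy; the 'get' plugin is implied.
  field_subtitle: field_subtitle
  # Chained plugins: skip empty values, then swap the old term ID
  # for the ID created by the example_tags migration.
  field_tags:
    -
      plugin: skip_on_empty
      method: process
      source: field_tags
    -
      plugin: migration_lookup
      migration: example_tags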

Drupal 8 core has Source Plugins for all standard Drupal 6 and Drupal 7 entity types (node, taxonomy, user, etc.). The only time you’ll ever need to write a Source plugin is for a migration from a source other than Drupal 6 or 7, and many of these are already available as Contrib modules.

Also included in Drupal core are Destination Plugins for all of the core entity types. Unless you’re using a custom entity in Drupal 8, and migrating data into that entity, you’ll probably never write a Destination Plugin.

Development Foundations

There are a few key requirements you need to have in place before you can begin development.  First, and probably foremost, you need to have both your Drupal 6/7 and Drupal 8 sites - the former full of all your valuable content, and the latter empty of everything but structure.

An important note: though the completed migration will be run on your production server, you should be using development environments for this work. At Phase2, we use Outrigger to simplify and standardize our dev and production environments.

For migration purposes, we only actually need the Drupal 7 site’s database itself, in a place that’s accessible to the destination site.  I usually take an SQL dump from production, and install it as an additional database on the same server as the destination, to avoid network latency and complicated authentication requirements. Obviously, unless you freeze content for the duration of the migration development, you’ll have to repeat this process for final content migration on production.

I’d like to reiterate some advice from my last post: I strongly recommend sanitizing user accounts and email addresses on your development databases.  Use drush sql-sanitize and avoid any possibly embarrassing and unprofessional gaffes.

On your Drupal 8 site, you should already have completed the creation of the new content types, based on information you discovered and documented in your first steps.  This should also encompass the creation of taxonomy vocabularies, and any fields on your user entities.

In your Drupal 8 settings.php file, add a second database config array pointed at the Drupal 7 source database.

sites/default/settings.php




$databases['migration_source_db']['default'] = array(
  'database' => 'example_source',
  'username' => 'username',
  'password' => 'password',
  'prefix' => '',
  'host' => 'db',
  'port' => '',
  'namespace' => 'Drupal\Core\Database\Driver\mysql',
  'driver' => 'mysql',
);


Finally, you’ll need to add the migration module suite to your site.  The baseline for migrations is migrate, migrate_drupal, migrate_plus, and migrate_tools.  The Migrate and Migrate Drupal modules are core code. Migrate provides the basic functionality required to take content and put it into Drupal 8.  Migrate Drupal provides code that understands the structure of Drupal 6 and 7 content, and makes it much more straightforward to move content forward within the Drupal ecosystem.

Both Migrate Plus and Migrate Tools are contributed modules available at drupal.org. Migrate Plus, as the name implies, adds some new features, most importantly migration groups. Migrate Tools provides the drush integration we will use to run and rollback migrations.

Drupal 8 core code also provides migrate_drupal_ui, but I recommend against using it. By using Migrate Tools, we can make use of drush, which is more efficient, can be incorporated into shell scripts, and has clearer error messages.

Framing the House

We’ve done the planning and laid the foundations, so now it’s time to start building this house!

We start with a new, custom module.  This can be pretty bare-bones, to start with.

example_migrate/example_migrate.info.yml




  type: module
  name: 'Example Migrate'
  description: 'Example custom migrations'
  package: 'Example Migrate'
  core: '8.x'
  dependencies:
    - drupal:migrate
    - drupal:migrate_plus
    - drupal:migrate_tools
    - drupal:migrate_drupal


Within our module folder, we need a config/install directory. This is where all our config files will go.

Migration Groups

The first thing we should make is a general migration group. While it’s possible to put all the configuration into each and every migration you write, I’m a strong believer in DRY programming (Don’t Repeat Yourself).  Migrate Plus gives us the ability to put common configuration into a single file and use it for multiple migrations, so let’s take advantage of that power!

Note the filename we’re using here. This naming convention gives Migrate Plus the ability to find and parse this configuration, and marks it as a migration group.

example_migrate/config/install/migrate_plus.migration_group.example_general.yml




  # The machine name of the group, by which it is referenced in individual migrations.
  id: example_general
  # A human-friendly label for the group.
  label: General Imports
  # More information about the group.
  description: Common configuration for simple migrations.
  # Short description of the type of source, e.g. "Drupal 6" or "WordPress".
  source_type: Drupal 7 Site
  # Here we add any default configuration settings to be shared among all
  # migrations in the group.
  shared_configuration:
    source:
      key: migration_source_db
  # We add dependencies just to make sure everything we need will be available.
  dependencies:
    enforced:
      module:
        - example_migrate
        - migrate_drupal
        - migrate_tools


This is a very simple group that we will use for migrations of simple content. Most of the settings here are self-descriptive. However, source is a critical piece of configuration: it uses the key of the database connection we added to settings.php earlier, which gives Migrate access to that database. We’ll examine a more complicated migration group another time.

User Migration

In Drupal, users pretty much have their fingers in every pie.  They are listed as authors on content, they are creators of files… you get the picture.  That’s why it’s usually the first migration to get run.

Note again the filename convention here, which allows Migrate Plus to find it, and marks it as a migration (as opposed to a group).

example_migrate/config/install/migrate_plus.migration.example_user.yml




  # Migration for user accounts.
  id: example_user
  label: User Migration
  migration_group: example_general
  source:
    plugin: d7_user
  destination:
    plugin: entity:user
  process:
    mail:
      plugin: get
      source: mail
    status: status
    name:
      -
        plugin: get
        source: name
      -
        plugin: dedupe_entity
        entity_type: user
        field: name
    roles:
      plugin: static_map
      source: roles
      map:
        2: authenticated
        3: administrator
        4: author
        5: guest_author
        6: content_approver
    created: created
    changed: changed
  migration_dependencies:
    required: { }
  dependencies:
    enforced:
      module:
        - example_migrate


Wow! There’s lots of stuff going on here.  Let’s try and break it down a bit.




  id: example_user
  label: User Migration
  migration_group: example_general

The id is a standard machine name for this migration; it’s what we’ll reference with drush to run the migration. The label is a standard human-readable name. The migration_group connects this migration to the group we defined above, which means this migration now inherits all of the shared configuration there - notably, the connection to the Drupal 7 database.




  source:
    plugin: d7_user
  destination:
    plugin: entity:user


Here are two key items.  The source plugin defines where we are getting our data, and what format it’s going to come in.  In this case, we are using Drupal core’s d7_user plugin.

The destination plugin defines what we’re making out of that data, and the format it ends up in.  In this case, we’re using Drupal core’s entity:user plugin.




  process:
    mail:
      plugin: get
      source: mail
    status: status
    name:
      -
        plugin: get
        source: name
      -
        plugin: dedupe_entity
        entity_type: user
        field: name
    roles:
      plugin: static_map
      source: roles
      map:
        2: authenticated
        3: administrator
        4: author
        5: guest_author
        6: content_approver
    created: created
    changed: changed


Now we get into the real meat of a migration - the Process section. Each field you’re going to migrate has to be defined here. They are keyed by their field machine name in Drupal 8.  

Each field assigns a plugin parameter, which defines the Process Plugin to use on the data. Each of these process plugins will take a source parameter, and then possibly others.  The source parameter defines the field in the data array provided by the source plugin.  (Yeah, like I’ve said before, naming things clearly isn’t Drupal’s strong suit).

Our first example is mail. Here we are assigning it the get process plugin. This is the easiest process to understand, as it literally takes the data from the old site and gives it to the new site without transforming it in any way. Since email addresses don’t have any formatting changes or necessary transformations, we just move them.

In fact, the get process plugin is Drupal’s default, and our next example shows a shortcut to use it. The status field is getting its data from the old status field. Since get is our default, we don’t even need to actually specify the plugin, and the source is simply implied. See the documentation on drupal.org for more detail.
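To make the equivalence concrete, the two mappings below behave identically; the shortcut form simply fills in the default get plugin and assumes the source field has the same machine name:

  # Shortcut form, relying on the default plugin:
  status: status

  # Fully spelled-out equivalent:
  status:
    plugin: get
    source: status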

Name is a slightly more complicated matter.  While usernames don’t change much in their format, we want to make absolutely sure that they are unique.  This leads us to Plugin Chaining, an interesting option that allows us to pass data from one plugin to another, before saving it. The YML array syntax, as demonstrated above, allows us to define more than one plugin for a single field.

We start off by defining the get plugin, which just gets the data from a source field. (You can’t use the default shortcut when you’re chaining, incidentally.)

We then pass it off to the next plugin in the chain, dedupe_entity. This plugin guarantees that the value we save is unique. It takes the additional parameters entity_type and field, which define the entity type to check against for uniqueness and the field on that entity to look in. See the documentation for more detail.

Note that this usage of dedupe_entity does not specify a source parameter.  That’s because plugin chaining hands off the data from the first plugin in line to the next, becoming, in effect, the source.  It’s very similar to method chaining in jQuery or OOP PHP.  You can chain together as many process plugins as you need, though if you start getting up above four it might be time to re-evaluate what you’re doing, and possibly write a custom processor.

Our final example to examine is roles. User roles in Drupal 7 were keyed numerically, but in Drupal 8 they are based on machine names.  The static_map plugin takes the old numbers, and assigns them to a machine name, which becomes the new value.

The last two process items are changed and created. Like status, they are using the get process plugin, and being designated in the shortcut default syntax.




  migration_dependencies:
    required: { }
  dependencies:
    enforced:
      module:
        - example_migrate


The last two configs are pretty straightforward. migration_dependencies is used when a migration requires data from other migrations (we’ll get into that more another time, but there’s a small sketch below). dependencies is used when a migration requires a specific additional module to be enabled; in my opinion it’s largely redundant with the dependencies declared in the module itself, so I don’t use it much.
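For illustration only, here is what a required dependency might look like in a later migration. A node migration that needs its authors in place first could declare a dependency on the user migration above; the example_article id is hypothetical and not part of the configuration we’ve written so far.

  # Hypothetical fragment of a node migration that must run after example_user.
  id: example_article
  label: Article Migration
  migration_group: example_general
  migration_dependencies:
    required:
      - example_user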

In the next post, we’ll cover taxonomy migrations and simple node migrations. We’ll also share a really useful tool for migration development.  Thanks for reading!

Jan 25 2018
Jan 25

If you are considering a move to Drupal 8, or upgrading your current Drupal platform, it’s likely that you’ve come across the term “decoupled Drupal”, aka “headless Drupal”. But do you know what it means and what the implications of decoupled Drupal are for marketers? In this guide we will define decoupled Drupal and share three reasons why marketers should consider a decoupled architecture as they evolve their digital experience platforms.

What is Decoupled Drupal?

Think about your favorite department store. You walk in and enjoy the ambiance, interact with the items across departments, maybe chat with an employee, but you never venture into the back of the store. The back of the store exists to house items that employees can access and feature in the departments for customers to see.

With decoupled Drupal, a website visitor will not interact with Drupal directly, much like shoppers do not interact with the back of a brick and mortar store. The visitor will see pages created with Javascript frameworks (like Angular.js, React.js, or Ember), rather than a traditional Drupal theme. The Drupal CMS serves as an interface for editors to enter content, but its primary use is as a store for content.

To put it very simply, decoupled Drupal separates your front end experiences from the back end CMS. Here’s an image to help you visualize the difference between a more traditional Drupal architecture and a decoupled Drupal setup.

image of decoupled Drupal architecture vs. more traditional Drupal setup

If you would like to dive in further, here is a great blog by Dries Buytaert on decoupled Drupal architecture in 2018.

Why Should Marketers Consider Decoupled Drupal?

Make Multi-Platform a Breeze

If you are a large organization with many digital properties to maintain and update, then a decoupled Drupal backend can make your life a lot easier. By serving as a content repository, the decoupled CMS allows you to serve up dynamic content across many different places, including mobile apps, voice tech platforms, IoT devices, and future tech down the road.

Create Beautiful Front End Experiences

It’s no secret that traditional Drupal architectures come with some design limitations that can prevent designers and front-end developers from properly implementing a modern design system that offers exceptional user experience.

Decoupled Drupal facilitates the use of external design systems. In this approach, Drupal is only responsible for gathering data, passing that data to an external design system, and handing over control of the markup to that system, ensuring that your content will present beautifully across all of your digital platforms.

Boost Marketing Agility to Provide Superior Customer Experience

Updating and redesigning digital properties quickly and with customer expectations in mind is a huge and never-ending challenge for marketers, not to mention a huge investment of time and resources across development, design, and marketing departments.

Updates and redesigns within a traditional Drupal architecture typically take quite some time because both the back end and the front end must be modified, meaning that you the marketer are relying on both developers and designers to complete the project. CX is evolving so fast that by the time you wrangle your development team, bring in design, and agree on the way forward you may find that your proposed changes already look dated!

Decoupling your Drupal CMS allows you to make upgrades to the back end without impacting UX on the frontend. And in turn, you can make design and UX changes to the front end independently from the back end.

In a decoupled architecture you keep the CMS as a long-term product on the back end, but can make important front end UX changes that impact customer acquisition and retention more frequently and more cheaply.

Decoupled Drupal is not for everyone. If you only need to manage content for your company’s website and do not maintain multiple digital properties, a more traditional Drupal CMS architecture probably makes more sense for you. It’s time to consider decoupled Drupal if you are a large organization with several uses for the same content, such as multiple sites displaying the same content or a stable of various front-end devices.

If you would like to further discuss if decoupled Drupal is right for your business, reach out to us here. And check out this whitepaper for a deeper dive into all of the Drupal 8 architecture options.

Jan 18 2018
Jan 18

In my last post, we discussed why marketers might want to migrate their content to Drupal 8, and the strategy and planning required to get started. The spreadsheet we shared with you in that post is the foundation of a good migration, and it usually takes a couple of sprints of research, discussion, and documentation to compile.  It’s also a process that’s applicable to all content migrations, no matter the source or destination framework.

In this post, we will talk about what’s required from your internal teams to actually pull off a content migration to Drupal 8. In later posts, we’ll cover the actual technical details of making the migration happen.

Migration: A Definition

It’s probably worth taking some time here to clarify what, exactly, we’re talking about when we say ‘migration’. In this context, a migration is a (usually automated) transfer of existing content from an old web site to a new one. This also usually implies a systems upgrade, from an outdated version of your content management system to a current version.  In these exercises, we’re assuming that you’re moving from Drupal 6 or 7 to Drupal 8.

What kind of team is required?

There are several phases of migration, each of which requires a different skill set.  The first step is outlined in detail in my last post. The analysis done here is a joint effort, generally requiring input from a project manager and/or analyst, a marketing manager, and a developer.  

The project manager and analyst should be well versed in information architecture and content strategy (there is some great information on this topic at usability.gov). Further, it is really helpful if they have an understanding of the capabilities of the source and target systems, as this often informs what content is transferable, and how.

It’s also helpful if your team has a handle on the site’s traffic and usage. This usually falls to a senior content editor or marketing manager.  Also important is that they have the ability to decide what content is worth migrating, and in what form.

In the documentation phase of migration, the developer often has limited input, as this is the least-technical phase of the whole process. However, they should definitely have some level of oversight on the decisions being made, just to ensure technical feasibility.  That requires a good thorough understanding of the capabilities of the source and target systems.

One of the parties should also have the ability to make and export content types and fields. You can see Mike Potter’s excellent Guide to Configuration Management for more information on that.

Once development on the migration begins, it mostly becomes a developer task. Migrations are a really great mentoring opportunity (We’re really big on this at Phase2).  

Finally, someone on the team also needs the ability to set up the source and target databases and files for use in all the environments (development, testing, production).

Estimation

“How long will all this take?”  We hear this a lot.  And, of course, there’s no one set answer. Migration is a complicated task with a lot of testing and a lot of patience required. It’s pretty difficult to pin down, but here are some (really, really rough) guidelines for you to start from. Many of the tasks below may sound unfamiliar; they will be covered in detail in later posts.

 

Node/User/Taxonomy migrations (1-5 content types / 6-10 content types / 11+ content types):

  • Initial analysis (“the spreadsheet”): 16-24 hours / 32-40 hours / 48-56 hours

  • Content type creation & export: 16-40 hours / 40-80 hours / 8 hours per type

  • Configuration Grouping: 16-24 hours / 24-40 hours / 24-40 hours

  • Content migrations: 16-40 hours / 32-56 hours / 8 hours per type

  • Testing: 24-32 hours / 40-56 hours / 8 hours per type

Additional Migrations

  • Files & media migration: 32-56 hours

  • Other entity types: 16-40 hours per entity type

  • Migrations from non-Drupal sources: 16-40 hours per source type

The numbers here are in “averaged person-hours” format - this would be what it would take for a single experienced developer to accomplish these tasks. Again, remember that these are really rough numbers and your mileage will vary.

You might note, reading the numbers closely, that most of the tasks are ‘front-loaded’.  Migration is definitely a case where the heavy work happens at the start, to get things established.  Adding additional content types becomes simpler with time - fields are often reused, or at least similar enough to each other to allow for some overlap of code and configuration.

Finally, these numbers are also based on content types of "average" complexity. By this I mean, somewhere between 5 and 15 un-customized content fields.  Content types with substantially more fields, or with fields that require a lot of handling on the data, will expand the complexity of the migration.  More complexity means more time.  This is an area where it's hard to provide any specific numbers even as a guideline, but your migration planning spreadsheet will likely give you an idea of how much extra work is necessary.  Use your best judgement and don't be afraid to give yourself some wiggle room in the overall total to cover these special cases.

Security and Safety Considerations

As with all web development, a key consideration in migrating content is security. The good news is that migration is usually a one-time occurrence.  Once it’s done, all the modules and custom code you’ve written are disabled, so they don’t typically present any security holes. As long as your development and database servers are set up to industry standard, migration doesn’t present any additional challenges in and of itself.

That said, it’s important to remember that you are likely to be working with extremely sensitive data - user data almost always contains PII (Personally Identifiable Information). It is therefore important to make sure that user data - in the form of database dumps, XML files, or other stores - does not get passed around in emails or other insecure channels.

Depending on your business, you may also have the same concerns with actual content, or with image and video files. Be sensible, take proper precautions.  And make sure that your git repository is not public.

I also strongly recommend sanitizing user accounts and email addresses on your development databases.  There’s no feeling quite like accidentally sending a few thousand dummy emails to your unsuspecting and confused customers.  Use drush sql-sanitize and avoid any possibly embarrassing and unprofessional gaffes.

What’s next?

Well, we’ve covered all the project management aspects of migration - next up is some tech talk!  Stay tuned for my next post, which will cover the foundations of developing a migration.

Jan 11 2018
Jan 11

With exponential growth in marketing tools and website builders, why are marketers still adopting Drupal and maintaining their existing Drupal systems? And how has Drupal evolved to become a crucial piece of leading brands’ martech ecosystems?

For marketing decision makers, there are many reasons to choose and stick with Drupal, including:  

  • Designed to integrate with other marketing tools

  • Increased administrative efficiencies

  • Flexible front-end design options

  • Reduced costs

Plays Well With Others

Your customer experience no longer just depends on your CMS. Your CMS must integrate with new technologies and channels, as well as your CRM and marketing automation tools, to perform as a cohesive digital experience platform that reaches customers where they are.

Drupal is the most flexible CMS available when it comes to third-party integrations. Along with the power of APIs, Drupal can help you outfit your digital presence with the latest emerging tech more quickly than your competitors, allowing you to deliver an unparalleled customer experience.

Check out how Workday integrated disparate systems and tools, such as Salesforce and Mulesoft, to create a seamless experience that serves both their customer community members and internal support teams.

Increased Administrative Efficiencies

In large organizations, interdepartmental collaboration obstacles often translate into inefficient content publishing practices. This is compounded further when marketers and content editors need a developer in the loop to help them make changes. When these hurdles aren’t properly navigated, prospects and customers suffer by not being able to gain easy access to the most relevant and up-to-date product or service information.

Over the years, Drupal has evolved to be flexible and accommodating for non-technical content admins, providing a highly customizable and user-friendly administration dashboard and flexible user privileges. Drupal empowers marketing teams to design content independently of developers with modules like Paragraphs, which lets content admins rearrange page layouts without code adjustments while enforcing consistency across company sites.

Flexible Front-End Design Options

Drupal 8 provides increased design flexibility by letting the front and back end architectures work as separate independent systems. Therefore the visual design of a website can be completely rebuilt without having to invest in any back-end architecture changes.

While this may seem a bit technical and in the weeds, this has significant benefits for marketing resources and budget! With this design flexibility, marketers can implement new designs faster and more frequently, empowering your team to test and iterate on UX to optimize the customer experience.

Reduced Costs

The number of marketing tools required to run a comprehensive omnichannel marketing strategy is only growing. We add tools to our martech stack to help us grow our reach, understand our customers better, and personalize customer engagement. Each one of these tools has its own associated package cost or service agreement.

As an open source platform, Drupal does not incur any licensing costs. In contrast, a large proprietary implementation can easily cost hundreds of thousands of dollars just for the right to use the software; Drupal’s community-developed software is free, saving companies millions.

Drupal is also fully customizable from the get-go: not only when it comes to features and what site visitors see, but also with regard to editor tools, workflows, user roles and permissions, and more. This means the money that would go towards customization projects is freed up to benefit customers.

Digital marketing managers considering Drupal, or those contemplating a migration to Drupal 8, should consider these benefits and how Drupal is helping digital marketers evolve to provide a more agile and user-friendly digital experience for prospects and customers.

Strongly considering a move to Drupal or a migration to Drupal 8? Reach out to us with any questions. And in the meantime check out Drupal 8 Content Migration: A Guide for Marketers.

Nov 20 2017
Nov 20

This year’s BADCamp DevOps summit featured a strong showing on the topic of containers. The program included many Docker-centric topics, and even the sessions that were not container-centric showcased a lively interest in how new ideas, tools, and services relate to containers.

I strongly agreed with the keynote by Michelle Krejci arguing for containers as the next natural step in the commoditization of infrastructure. The Docker-driven Development panel in the afternoon featured maintainers of five different tools aimed at facilitating local development with Docker. Naturally, we represented Outrigger.

Coming out of the panel we were excited to learn the many ways in which our core technical decisions align with other Docker tools in the Drupal ecosystem, as well as the many ways Outrigger’s particular developer experience and learning focus marks it as a little different.

Thanks to Alec for organizing and Rob for moderating.

Here is a recap of the Outrigger answers to various questions put to the panel.

How did your project get started? What need did it initially cover?

Outrigger got started in mid-2014 as a set of BASH scripts to facilitate setting up local, Docker-based environments for developers who didn’t know about Docker or containers, but who expected their Drupal content to persist across work sessions and expected nice, project-contextual URLs (instead of “localhost”).

We wanted a tool to allow our teams to easily jump between projects without running a bunch of heavy VMs or needing to juggle environment requirements.

It has since evolved into a Golang-based Docker management and containerized task-running tool, with a library of Docker images, a set of Dockerization conventions shipped for Drupal by a code generator, and of course a website, all spanning 20+ Github repositories.

How do you deal with the complexity of Docker?  Do you expose how containers are configured and operate or do you do something to ease the learning curve?

Outrigger brokers how Docker is managed on OSX, Linux, and Windows. We work really hard to minimize the time it takes a developer to onboard to a project, and to make the most common operations any project needs easy to run without regard for the technologies involved.

That gives us the breathing space to directly leverage fairly standard Docker configuration, especially configurations for docker-compose. This allows us to include that configuration as part of the project code repository. We want to make it easy for someone to look at and understand what is going on under the covers so that they can learn more when they are ready.

Common operations, presented as “project scripts”, are configured in an outrigger.yml file at the project root and are easily inspected. They are chains of BASH commands, usually using docker or docker-compose to execute Drush, Drupal Console, BLT, Composer, Grunt, Gulp, npm, yarn, webpack, and all the other tools inside containers.

Outrigger’s emphasis is on developer experience conventions and utilities to promote project team consistency first, with Docker hosting & management being a secondary concern.

Could you scale a local environment built on your project to a production use case? If so, how?

Outrigger is not just a tool; it’s also a set of conventions and a library of Docker images. We primarily support local development and continuous integration, but taking an Outrigger-based project to production would simply be a matter of publishing an application-specific image.

We’ve used Docker in production this way, and also in hybrid approaches such as using the Docker infrastructure as part of a release system shipping to otherwise traditional servers.

Our current research is into how to more naturally support Kubernetes for all non-local environments.

How are you solving the slow filesystem issue (Docker 4 Mac specific)? Do you see your approach changing in the future?

We use docker-machine instead of Docker4Mac primarily because Docker4Mac performance has traditionally been very poor and their networking and routing support is similarly bad.

We initially took the NFS route with docker-machine for shared files and still found that didn’t meet reasonable performance targets for our typical builds. NFS can perform really poorly when you have lots of small files; in some cases our builds took 20 minutes instead of 4.

We’ve since switched to a Unison-based approach to get the best of both worlds in terms of local IDE performance and container performance. In our measuring, it’s as fast as the virtual machine can be and we’ve seen close-enough to native performance that it’s a non-issue for now. Our Unison-based approach also has the benefit of supporting filesystem event notifications, making watches that run inside containers a reality.  It even has a similar level of support overhead to NFS in terms of helping less ops-centric developers continue to work smoothly.  We still use the ease of NFS for operations that don’t require high performance or in-container filesystem event notification.

If Docker4Mac addressed all our performance and development experience concerns we would probably switch to extending that as a common core product. However, beyond file system performance, it doesn’t seem like they have addressed some of the other network routing and DNS issues that Outrigger is focused on solving.

Are there any other platform-specific pain points you’ve seen?

Finding someone willing to test on Windows and help us find Windows-savvy solutions to DNS routing has been a challenge. We’re mostly a macOS and Linux shop.

How would you handle integration with arbitrary third party tools that are not built into your project yet? E.g., Someone wants to use Elastic Search or some crazy Node frontend. How would you wire that in?

We support anything that can run in a Docker container. Wiring a new service in can be driven entirely from the individual application, as follows:

  1. Find or build a Docker image, preferably with s6 for init scripts and signal handling, and confd support for commonly configured options.

  2. Wire that into the project’s docker-compose configuration with

    • Volumes (or bind mounts that assume a /data directory) to persist data

    • Environment config file overrides added via custom Dockerfile or bind mounts in the project’s ./env directory.

    • Labels for operating services that should be web accessible so DNSDock can provide DNS resolution for friendly URLs

Outrigger is very open on matters of image structure; most of the details are usage conventions or opting in to functionality.
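As a rough sketch of what those steps can look like for the Elasticsearch example, here is a minimal docker-compose service. The image tag, the named volume, and the com.dnsdock.* label keys are illustrative assumptions based on the conventions described above, not verbatim Outrigger output; check the image’s and DNSDock’s documentation for the exact keys your setup expects.

  # docker-compose.yml fragment - names and labels are illustrative only.
  version: '2'
  services:
    search:
      image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
      environment:
        # Keep the JVM small for local development.
        - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      volumes:
        # Persist index data outside the container.
        - search-data:/usr/share/elasticsearch/data
      labels:
        # Assumed DNSDock labels so the service resolves at a friendly hostname.
        - com.dnsdock.name=search
        - com.dnsdock.image=myproject
  volumes:
    search-data: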

What is your project’s current relationship with Drupal?  Would you say you’re “Married”, “It’s complicated”, or “Just friends”?

Our most commonly used Docker images are fine-tuned to support Drupal or tools common to the Drupal ecosystem. Our most sophisticated project conventions are worked out with Drupal, in the form of our Yeoman-based Outrigger Drupal project generator.

We deliberately wanted Outrigger to be flexible enough to facilitate good developer experiences regardless of technology, so I would say “Just Friends”.

What’s the biggest missing piece (or potential opportunity) for local development stacks these days?

Continuity with production is the holy grail of local development, and it’s very close.

Supporting execution of tasks in the local environment that may not always need to run in containers - or that, when they do run in containers, may be complex enough that asking developers to memorize docker-compose commands is still a burden. To that end we’ve created a task runner in the Outrigger CLI, meant as a companion to docker-compose and any tools run inside the containers.

If a genie came out of a bottle and granted you one wish for the next Docker release OTHER THAN a faster filesystem on OSX, what would you wish for?

I think the greatest thing Docker4Mac could do is expose network routing to the underlying docker bridge network.  Allowing for direct routing to containers within the macOS hypervisor would remove the need for all containers to share the localhost port space.  This would facilitate launching, for example, multiple database containers and connecting to each of them on the same port but at different domain names.  The networking limitations of Docker4Mac are a big obstacle to an enhanced developer experience and power-user capabilities.

Docker natively supporting Kubernetes instead of just Docker Swarm (It’s happening!)

Click here to learn more about why you should add Outrigger to your development toolkit, and be on the lookout for our upcoming blog on Outrigger version 2.0.0.

Nov 09 2017
Nov 09

A growing number of healthcare organizations have chosen to build their digital presence with Drupal, including Memorial Sloan Kettering and New York State’s largest healthcare provider, Northwell Health. And healthcare continues to adopt Drupal as the content management system of choice, with many healthcare systems embracing the benefits of migrating from older Drupal platforms to Drupal 8.

What is it about Drupal that keeps leading healthcare institutions committed to the platform? And how has Drupal evolved to help healthcare organizations better serve their patients and create a secure, user-friendly digital experience?

For digital healthcare decision makers, there are many reasons to choose and stick with Drupal, including:  

  • Best-in-class security

  • Centralized multi-site management

  • Built for third-party integrations

  • Increased administrative efficiencies and consistent UX

  • Improved accessibility

Let’s look at how these Drupal capabilities are helping digital healthcare evolve today.

Best-In-Class Security

While healthcare organizations may have baulked at using open source solutions initially due to security and patient privacy concerns, adoption of Drupal by leading medical facilities like the Children's Hospital of Philadelphia and Duke Health has extinguished myths around open source security.

Drupal’s collaborative, open source development model gives it an edge when it comes to security. Throngs of Drupal developers around the globe maintain a constant process of testing, reviews, and alerts that ensures potential security vulnerabilities are detected and eradicated.

Since thousands of developers dedicate their time and talents to finding and fixing security issues, Drupal can respond very quickly when problems are found. With Drupal 8, there are even more ways the Drupal community has taken action to make this software secure and evolve to respond to new types of attacks.

Centralized Multi-Site Management

Health systems often encompass many healthcare brands, each of which requires its own digital presence, content, and site functionality. Creating and managing a centralized, consistent experience for patients across health providers and devices can be tricky.

A multisite platform built with Drupal enables healthcare systems to run all of their sites off of a single codebase, providing better consistency and streamlined maintenance, and facilitating easier content sharing between sites, while empowering healthcare facilities with flexible functionality for their specific needs. Editors from one centralized office can easily publish and push content to multiple sites.

Editors can also quickly create new microsites without seeking developer assistance. This gives them greater agility in posting timely, relevant content for their patients across many different digital spaces.

Built for Third-party Integrations

Healthcare tools, tech, and software are experiencing explosive growth as new patient communication channels like chatbots, voice technology, and AI emerge. The digital patient experience no longer just lives on your CMS.

Your CMS must integrate with new technologies and channels, as well as your CRM and marketing automation tools, to perform as a cohesive digital experience platform that reaches patients where they are.

Drupal is the most flexible CMS available when it comes to third-party integrations. Along with the power of APIs, Drupal can help you outfit your digital presence with the latest emerging tech more quickly than your competitors, allowing you to deliver an unparalleled patient experience.

Increased Administrative Efficiencies and Consistent User Experience

In large, consolidated medical ecosystems, interoffice and interdepartmental collaboration obstacles often translate into inefficient content publishing practices. This is compounded further when content editors need a developer in the loop to help them make changes. When these hurdles aren’t properly navigated, patients suffer by not being able to gain easy access to the most relevant and up-to-date information.

Over the years, Drupal has evolved to be more flexible and accommodating for non-technical content admins, providing a highly customizable and user-friendly administration dashboard. Drupal empowers healthcare content admins to design content independent of developers with modules like Paragraphs, which lets content admins rearrange page layouts without code adjustments while enforcing consistency across agency sites.

Check out how Phase2 helped Memorial Sloan Kettering create a consistent user experience for over 130,000 patients with Drupal 8.

Improved Web Accessibility

To effectively serve their patients, healthcare websites must be accessible to an extremely large and diverse audience. This audience often requires accommodations for physical disabilities, and needs to be able to access information across an array of devices, and in multiple languages. With its diverse, worldwide community of contributors, Drupal facilitates meeting accessibility needs on a number of fronts.

Flexible and fully customizable theming makes it possible for Drupal sites to meet Section 508 and WCAG accessibility requirements. Responsive base themes are readily available to give themers a strong foundation for ensuring compatibility with a wide range of access devices. And internationalization is at the cornerstone of Drupal 8 to provide multilingual functionality.

These accessibility features are helping healthcare systems create a user-friendly experience for everyone, and ultimately pushing digital healthcare to follow user-centric design best practices.

Healthcare professionals considering Drupal, or a migration to Drupal 8, should consider these benefits and how leveraging Drupal for healthcare systems can evolve digital experiences to be seamless, intuitive, and accessible for both the patient and internal marketing and IT teams.

To learn more about evolving the digital patient experience with Drupal, listen to this podcast with Phase2 and our client, Northwell Health.

Nov 07 2017
Nov 07

If you’re a marketer considering a move from Drupal 7 to Drupal 8, it’s important to understand the implications of content migration. You’ve worked hard to create a stable of content that speaks to your audience and achieves business goals, and it’s crucial that the migration of all this content does not disrupt your site’s user experience or alienate your visitors.  

Content migrations are, in all honesty, fickle, challenging, and labor-intensive. The code that’s produced for a migration is used once and discarded; the documentation that supports it is generally never seen again once the migration is done. So what’s the value in doing it at all?

Your data is important (Especially for SEO!) 

No matter what platform you’re working to migrate, your data is important. You’ve invested lots of time, money, and effort into producing content that speaks to your organization’s business needs.

Migrating your content smoothly and efficiently is crucial for your site’s SEO ranking. If you fail to migrate highly trafficked content, or to ensure that existing links direct readers to your content’s new home, you will see visitor numbers plummet. Once you fall behind in SEO, it’s difficult to climb back up to a top spot, so taking content migration seriously from the get-go is vital for your business’ visibility.

Also, if you work in healthcare or government, some or all of your content may be legally mandated to be both publicly available and letter-for-letter accurate. You may also have to go through lengthy (read: expensive) legal reviews for every word of content on your sites to ensure compliance with an assortment of legal standards – HIPAA, Section 508 and WCAG accessibility, copyright and patent review, and more.

Some industries also mandate access to content and services for people with Limited English Proficiency, which usually involves an additional level of editorial content review (See https://www.lep.gov/ for resources).  

At media organizations, it’s pretty simple – their content is their business!

In short, your content is a business investment – one that should be leveraged.

So Where do I start with a Drupal 8 migration?

Like with anything, you start at the beginning. In this case that’s choosing the right digital technology partner to help you with your migration. Here’s a handy guide to help you choose the right vendor and start your relationship off on the right foot.

Once you choose your digital partner, content migration should start at the very beginning of the engagement. Content migration is one of the building blocks of a good platform transition. It’s not something that can be left for later – trust us on this one. It’s complicated, takes a lot of developer hours, and typically affects both your content strategy and your design.

Done properly, the planning stages begin in the discovery phase of the project with your technology vendor, and work on migration usually continues well into the development phase, with an additional last-sprint push to get all the latest content moved over.

While there are lots of factors to consider, they boil down to two questions: What content are we migrating, and how are we doing it?

Which Content to Migrate

You may want to transition all of your content, but this is an area that does bear some discussion. We usually recommend a thorough content audit before embarking on any migration adventure. You can learn more about website content audits here. Since most migration happens at a code & database level, it’s possible to filter by virtually any facet of the content you like. The most common in our experience are date of creation, type of content, and categorization.
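As a small, hedged illustration of what that filtering can look like in Drupal 8’s migrate system: core’s d7_node source plugin accepts a node_type option, so a migration can be limited to a single type of source content right in its configuration (the article machine name below is just an example).

  source:
    plugin: d7_node
    # Only pull nodes of this type from the source database.
    node_type: article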

While it might be tempting to cut off your site’s content to the most recent few articles, Chris Anderson’s 2004 Wired article, “The Long Tail” (https://www.wired.com/2004/10/tail/) observes that a number of business models make good use of old, infrequently used content. The value of the Long Tail to your business is most certainly something that’s worth considering.

Obviously, the type of content to be migrated is pretty important as well. Most content management systems differentiate between different ‘content types’, each with their own uses and value.  A good thorough analysis of the content model, and the uses to which each of these types has been and will be used, is invaluable here.  There are actually two reasons for that.  First, the analysis can be used to determine what content will be migrated, and how.  Later, this analysis serves as the basis of the creation of those ‘content types’ in the destination site.

A typical analysis takes place in a spreadsheet (yay, spreadsheets!). Our planning sheet has multiple tabs but the critical one in the early stages is Content Types.

 

content types planning sheet

Here you see some key fields: Count, Migration, and Field Mapping Status.

Count is the number of items of each content type. This is often used to determine if it’s more trouble than it’s worth to do an automated content migration, as opposed to a simple cut & paste job. As a very general guideline, if there are more than 50 items of content in a content type, then that content should probably be migrated with automation. Of course, the number of fields in a content type can sway that as well. Once this determination is made, that info is stored in the Migration field.

The Field Mapping Status column is for the use of developers, and reflects the current effort to create the new content types, with all their fields.  It’s a summary of the Content Type Specific tabs in the spreadsheet. More detail on this is below.

Ultimately, the question of what content to migrate is a business question that should be answered in close consultation with your stakeholders.  Like all such conversations, this will be most productive if your decisions are made based on hard data.

How do we do it?

This is, of course, an enormous question. Once you’ve decided what content you are going to migrate, you begin by taking stock of the content types you are dealing with. That’s where the next tabs in the spreadsheet come in.

The first one you should tackle is the Global Field Mappings. Most content management systems define a set of default fields that are attached to all content types. In Drupal, for example, this includes title, created, updated, status, and body. Rather than waste effort documenting these on every content type, document them once and, through the magic of spreadsheet functions, print them out on the Content Type tabs.

 

global field mappings

Generally, you want to note Name, Machine Name, Field Type, and any additional Requirements or Notes on implementation on these spreadsheets.

It’s worth noting here that there are decisions to be made about what fields to migrate, just as you made decisions about what content types.  Some data will simply be irrelevant or redundant in the new system, and may safely be ignored.

 

migration planning sheet

In addition to content types, you also want to document any supporting data – most likely users and any categorization or taxonomy. For a smooth migration, you usually want to actually start the development with them.

The last step we’ll cover in this post is content type creation. Having analyzed the structure of the data in the old system, it’s time to begin to recreate that structure in the new platform. For Drupal, this means creating new content type bundles, and making choices about the field types. New platforms, or new versions of platforms, often bring changes to field types, and some content will have to be adapted into new containers along the way.  We’ll cover all that in a later post.

Now, many systems have the ability to migrate content types, in addition to content. Personally, I recommend against using this capability. Unless your content model is extremely simple, the changes to a content type’s fields are usually pretty significant. You’re better off putting in some labor up front than trying to clean up a computer’s mess later.

In our next post, we’ll address the foundations of Drupal content migrations – Migration Groups, and Taxonomy and User Migrations. Stay tuned!

Oct 30 2017
Oct 30

When it comes to digital interactions, today’s users have become quite demanding. Whether they’re using a touchscreen or desktop, a native app or social media platform, they expect a continuous, unified experience, made up of seamless interactions - one that syncs with their offline journey as well.

We call these places of interaction touchpoints, and customers reach them via channels. In the past, brick-and-mortar stores used a single channel - a physical location - as a touchpoint to interface with customers.

Today, of course, brands are accessible via multiple channels, including websites, social media, and more. This approach, while effective for reaching wider audiences, opens the door for inconsistencies across touchpoints.


Touchpoints can easily become gaps. For instance, if product information is readily available on a store’s website but missing from its mobile app, users may become frustrated by the disparity. Users have come to expect a certain degree of contextual awareness, as well - why doesn’t the app know that I’m in the store and adjust accordingly? Why is there a gap in my experience? Unfortunately, gaps in the digital experience quickly lead to dissatisfaction with the brand as a whole and can cause consumer flight. What’s a business to do?

Enter Omni-Channel

Omni-channel is just a fancy way to describe a unified experience across multiple touchpoints, consistent on multiple devices. Martha Stewart is one of the most prominent examples of how to do omni-channel well. Whether you are shopping on marthastewart.com, visiting her store, reading her cookbooks, browsing her blog, or receiving her email newsletter, both the brand experience and the content are congruous and responsive to your device.


JustFab is another brand that does this well by allowing customers to make purchases within external apps like Instagram. Even though the transaction is happening on a completely different platform, the user experience is so coherent that the consumer doesn’t care that they are not actually making the purchase on the company’s website.

Technological advances - like the advent of Drupal 8 - make it easier for organizations to support content everywhere and anywhere. But truly implementing omni-channel requires a thoughtful digital strategy. You must consider the user journey, what digital touchpoints effortlessly enable this journey, and how these touchpoints support your own content strategy.

Omni-Channel Content Strategy: A Quick Tutorial

First, consider your own goals. What are you trying to achieve? Whether your objective is to sell a product, promote an event, or attract brand ambassadors, begin by identifying how your current content strategy supports - or detracts from - these goals.

Next, consider your audience. In interacting with you, what are their objectives? What is your unique value proposition to them, and how can you ensure they are aware of it? What do they gain from your organization in general, and each touchpoint in particular? Does the interaction engage them and pique their interest… or irritate them?

Then comes the user journey, aka the series of steps a potential customer moves through as he/she encounters your brand, explores your content, and hopefully makes a purchase (or whatever your ultimate objective is). Map out the ideal user journey for an ideal hypothetical customer. Which steps happen online? Which steps happen offline? Does the entire journey happen in one digital location, like your website, or do users bounce between various platforms? How can you build in contextual awareness to continually delight your users by thinking of their needs ahead of time? It is crucial to make the transitions between these touchpoints seamless - after all, continuity of experience for your users is what makes the difference between multi-channel and omni-channel.

How Does Drupal 8 Support Omni-Channel?

Now that you have a better grasp on your omni-channel content strategy, it’s time to start making technology decisions. Drupal has always been the go-to CMS for publishing content across multiple channels, and Drupal 8 continues in this tradition with several new features baked into core.

Web Services: Integration with APIs

Drupal 8 was designed to publish and consume content through APIs as a core feature. As a result, organizations can use Drupal as a central content repository, leveraging Drupal’s rich content structuring, creation, and management features. Drupal 8 essentially acts as a centralized hub, serving content to a variety of channels.


Personalization, Translation, and Localization

While each individual should have a consistent experience, your digital experience in general can - and indeed, should - vary from person to person. Tailor content for each and every user, depending on his/her preferences, history of interaction with the brand, location, and (obviously) language.

Drupal taxonomy terms can increase engagement on your website by targeting content to users. More advanced segmentation can also be achieved by using personalization modules such as the Web Engagement Management module.

In addition, multilingual capabilities are included in Drupal 8 core, making it easy to tailor users’ experience based on their locations. Several improvements in core make this possible:

  1. Native installation is available in 94 languages
  2. You can now assign language to everything
  3. Drupal 8 includes automated language and user interface translations downloads and updates
  4. Local translations are protected
  5. Field-level translation applies to all content and integrates with Views
  6. The built-in translation interface works for all configurations

Responsiveness Out-of-the-Box

This one is fairly obvious - omni-channel content should be accessible from any device. Drupal 8 was designed using a mobile-first strategy; all of the built-in starter themes are responsive. Its responsive design targets the viewport size and either scales the content and layout for the available real estate, or uses a new or modified layout, defined as a breakpoint. Drupal core comes with two modules that enable responsive behavior: Breakpoint and Responsive Image. This means that the ability to display your content appropriately on a variety of devices isn’t something you have to bolt on after the fact; it’s a core part of the Drupal framework.
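For example, a theme can declare the breakpoints it supports in a themename.breakpoints.yml file that the Breakpoint module reads. The theme name and media queries below are illustrative placeholders, not values from any particular theme:

  # example_theme.breakpoints.yml
  example_theme.mobile:
    label: Mobile
    mediaQuery: ''
    weight: 0
    multipliers:
      - 1x
  example_theme.wide:
    label: Wide
    mediaQuery: 'all and (min-width: 851px)'
    weight: 1
    multipliers:
      - 1x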


Summing It Up

We all interact with content in a variety of formats and mediums. Successful organizations have a strategy in place to take advantage of those channels and touchpoints best suited to their business goals, customers’ preferences, and technology capabilities. Drupal 8 is the latest and greatest means to this end!

Want to learn more about planning your digital roadmap to be omnichannel ready? Read more!

Oct 12 2017
Oct 12

Are you a digital marketer considering Drupal 8? This blog is for you.

As marketers we have a front row seat to the rapid evolution of the digital landscape and changing customer expectations. We’re living in a post-browser world, and we’re swiftly moving into the age of full blown digital experience ecosystems.

Customer expectations include a seamless experience and interaction with a wide range of touchpoints including mobile, wearables, IOT, and chatbots. An investment in a flexible system like Drupal 8 will help you deliver a customer experience that will set you apart from your competitors.

With the swift digital evolution and changing customer expectations come some compelling reasons for marketers to champion the investment in Drupal 8.

A few examples include an API-friendly design that lets you integrate with your existing SaaS tools, and increased content publishing efficiencies that allow marketers to quickly update and organize content across a growing number of touchpoints without developer assistance.

screenshot of Drupal 8 webinar

Watch this webinar to learn more about the marketing benefits of migrating to Drupal 8, and how to navigate the Drupal 8 decision.

You’ll learn:

  • Why organizations are moving to Drupal 8

  • How Drupal 8 can support your marketing and customer engagement strategies

  • When it's the right time and circumstance to make the move

  • What to consider for a successful migration plan

Watch the webinar here.

Oct 04 2017
Oct 04

So often in the enterprise software market, we see that one of the most pressing challenges companies face is scaling their online community support operations to match their explosive user growth.

A common factor in scaling success is always an emphasis on optimizing the user experience, design, and underlying technical architecture so that the entire digital support ecosystem is intuitive, fast, and consistent across touchpoints. And we are not talking about your traditional intranet portal. We’re talking about a digital support experience that is accessed from any device and provides a seamless brand experience across online and offline touchpoints.

A perfect example of this kind of community support success can be found in the updated platform of our client, Workday. The Workday customer and partner community has grown to over 70,000 members in the past two years - a 60% increase. In light of their rapid growth and evolving business needs, Workday worked with Phase2 to create an engaging community platform that supports and educates their customer community.

The updated platform, built on Drupal 8, features topic forums, product release information, quick discovery of valuable resources, direct access to customer support, and a custom “Brainstorm” feature that empowers the customer community to influence Workday’s product roadmap.

Workday is an example of an ambitious organization leveraging open source to build powerful solutions: no licensing fees, the power of the Drupal development community, and secure yet flexible core software that will scale with their business needs.

Learn more about the Workday project here.

Sep 07 2017
Sep 07

In 2012 we wrote a blog post about why many of the biggest government websites were turning to Drupal. The fact is, an overwhelming number of government organizations from state and local branches to federal agencies have chosen to build their digital presence with Drupal, and government continues to adopt Drupal as the content management system of choice. What is it about Drupal that keeps government committed to the platform? And how has Drupal evolved to help government agencies better serve their constituents?

 For digital government decision makers, there are many reasons to choose and stick with Drupal, including:  

  • Increased content publishing efficiencies

  • Flexible and consistent UX

  • Centralized management of many sites

  • Improved accessibility

  • No licensing fees, lower operational maintenance costs

  • Best-in-class security

Let’s look at how these Drupal capabilities are helping government agencies evolve today.

Increased Administrative Efficiencies and Consistent User Experience

In large organizations, interdepartmental collaboration obstacles often translate into inefficient content publishing practices. The problem is compounded when content editors need a developer in the loop to help them make changes. When these hurdles aren’t properly navigated, citizens suffer by not being able to gain easy access to the most relevant and up-to-date information.

Over the years, Drupal has evolved to be more flexible and accommodating for non-technical content admins, providing a highly customizable and user-friendly administration dashboard. Drupal empowers government content admins to design content independent of developers with modules like Paragraphs, which lets content admins rearrange page layouts without code adjustments while enforcing consistency across agency sites.

Check out how the Department of Energy’s user-centric site design leads the charge in government and competes with the private sector’s digital experience.

Improved Accessibility

To effectively serve the public, government websites must be accessible to an extremely large and diverse audience. At times, this audience may require accommodations for physical disabilities, an array of devices, and multiple languages. With its diverse, worldwide community of contributors, Drupal facilitates meeting accessibility needs on a number of fronts.

Flexible and fully customizable theming makes it possible for Drupal sites to meet Section 508 and WCAG accessibility requirements. Responsive base themes are readily available to give themers a strong foundation for ensuring compatibility with a wide range of access devices. And internationalization is a cornerstone of Drupal 8, providing multilingual functionality.

These accessibility features are helping government agencies create a user friendly experience for everyone, and ultimately pushing digital government to follow user-centric design best practices.

Centralized Management of Many Sites

Government agencies comprise many offices, each of which requires its own digital presence, content, and architecture. Creating and managing a centralized, consistent experience for constituents across offices and devices can be tricky.

Drupal allows government to develop a platform that runs all sites off of a single codebase, providing better consistency, streamlining maintenance, and facilitating easier content sharing between sites. Editors from one centralized government office can easily publish and push content to multiple sites.

Editors can also quickly create new microsites without seeking developer assistance. This gives them greater agility in posting timely, relevant content for their visitors across many different digital spaces.

Check out how Phase2 helped the State of Georgia move 55 websites from a proprietary system hosted at the state’s data center to a Drupal platform hosted in the cloud.

Reduced Costs

In order to truly evolve, government agencies need to allocate funds to the projects and teams that benefit their constituents, not to hosting services and site customization.

As an open source platform, Drupal incurs no licensing costs. While a large implementation can easily cost hundreds of thousands of dollars just for the right to use proprietary software, Drupal’s community-developed software is free, saving government millions.

Drupal is also fully customizable from the get-go, not only when it comes to features and what site visitors see, but also with regard to editor tools, workflows, user roles and permissions, and more. This means the money that would go toward customization projects is freed up for more appropriate use.

Drupal’s cost saving features enabled the State of Georgia to reduce platform operational costs by 65%.

 Best-In-Class Security

While government agencies have historically been wary of using open source software, the adoption of Drupal by leading federal agencies like the White House, Department of Energy, and U.S. Patent and Trademark Office has dispelled most of the security myths around open source software.

Drupal’s collaborative, open source development model gives it an edge when it comes to security. Throngs of Drupal developers around the globe sustain a constant process of testing, reviews, and alerts that detects and eradicates potential security vulnerabilities. Since thousands of developers dedicate their time and talents to finding and fixing security issues, Drupal can respond very quickly when problems are found. With Drupal 8, the Drupal community has taken even more steps to secure the software and to evolve it in response to new types of attacks.

Government managers considering Drupal, or government users contemplating a migration to Drupal 8, should consider these benefits and how Drupal is helping digital government evolve to be a more efficient, user-friendly, and accessible environment for constituents. For more information on how to use Drupal to increase efficiency and lower costs in government agencies, take a look at the work Phase2 has done with leading government agencies.

Jul 26 2017
Jul 26

Introduction

One of the greatest improvements added in Drupal 8 was the Configuration Management (CM) system. Deploying a site from one environment to another involves somehow merging the user-generated content on the Production site with the developer-generated configuration from the Dev site. In the past, configuration was exported to code using the Features module, which I am a primary maintainer for.

Using the D8 Configuration Management system, configuration can now be exported to YAML data files using Drupal core functionality. This is even better than Features because a) YAML is a proper data format instead of the PHP code that was generated by Features, and b) D8 exports *all* of the configuration, ensuring you didn’t somehow miss something in your Features export.
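A rough sketch of that core workflow with Drush follows (the config/sync path is illustrative and assumes your sync directory is already defined in settings.php):

# On the site where the config changes were made: export active config to YAML and commit it.
drush config-export -y
git add config/sync
git commit -m "Export configuration"

# On the target environment: pull the code, then import the configuration.
git pull
drush config-import -y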

“Drupal 8 sites still using Features for configuration deployment
need to switch to the simpler core workflow.”

Complex sites using Features for environment-specific or multi-site configuration should investigate the Config Split module. Sites using Features to bundle reusable functionality should consider whether their solutions are truly reusable and investigate new options such as Config Actions.

Features in Drupal 8

When we ported Features to Drupal 8, we wanted to leverage the new D8 CM system, and return Features to its original use-case of packaging configuration into modules for reusable functionality. New functionality was added to Features in D8 to help suggest which configuration should be exported together based on dependencies. The idea was to stop using Features for configuration deployment and instead just use it to organize and package your configuration.

We’ve found that despite the new core configuration management system designed specifically for deployment, people are still using Features to deploy configuration. It’s time to stop, and with a few exceptions, maybe it’s time to stop using Features altogether.

Problems using Features

Here is a list of some of the problems you might run into when using Features to manage your configuration in D8:

  1. Features suggests configuration to be exported with your Content Type, but after exporting and then trying to enable your new module, you get “Unmet dependency” errors.

  2. You make changes to config, re-export your feature module, and then properly create an update-hook to “revert” the feature on your other sites, only to find you still get errors during the update process.

  3. You properly split your Field Storage config from your Field Instance so you can have multiple content types that share the same field storage, but when you revert your feature it complains that the field storage doesn’t exist yet. This is because you didn’t realize you needed to revert the field storage config *first*.

  4. You try to refactor your config into more modular pieces, but still run into what seems like circular dependency errors because you didn’t realize Features didn’t remove the old dependencies from your module’s .info.yml file (nor should it).

  5. You decide to try the core CM process using the Drush config-export and config-import commands, but after reverting your features your config-export reports a lot of UUID changes. You don’t even know which UUIDs it’s talking about or which changes you should accept.

  6. You update part of your configuration and re-export your module. When you revert your feature on your QA server, you discover that it also overwrote some other config changes that were made via the UI that somebody forgot to add to another feature.

  7. The list goes on.

Why Features is still being used

Given all of the frustrating complications with Features in D8, why do some still use it?  After all, up until a few months ago it was the default workflow even in tools such as Acquia BLT.

Most people who still use Features typically fall into two categories:

  1. “My old D7 workflow using Features still seems to mostly work, I’m used to it and just deal with the new problems, and I don’t have resources to update my build tools/process.”

  2. “I am building a complex platform/multi-site that needs different configuration for different sites or environments and having Features makes it all possible. I don’t have to worry about non-matching Site UUIDs.”

People in the first category just need to learn the new, simpler, core workflow and the proper way to manage configuration in Drupal 8. It’s not hard to learn and will save you much grief over the life of your project. It is well worth the time and resource investment.

Until recently, people in the second category had valid concerns because the core CM system does not handle multiple environments, profiles, distributions, or multi-site very well. Fortunately there are now some better solutions to those problems.

Handling environment-specific config

Rather than trying to enable different Features modules on different environments, use the relatively new Config Split module. Config Split allows you to create multiple config “sync” directories instead of just dumping all of your config into a single location. During the normal config-import process it will merge the config from these different locations based on your settings.

For example, you split your common configuration into your main “sync” directory, your production-specific config into a “prod” directory, and your local dev-specific config into a “dev” directory. In your settings.php you tell Drupal which environment to use (typically based on environment variables that tell you which site you are on).
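A minimal sketch of that settings.php logic follows, assuming splits named “dev” and “prod” and an environment variable called APP_ENV (both names are assumptions for illustration; the status override is Config Split’s documented switch):

// settings.php (sketch): turn the right split on for this environment.
$env = getenv('APP_ENV') ?: 'dev';
$config['config_split.config_split.dev']['status'] = ($env === 'dev');
$config['config_split.config_split.prod']['status'] = ($env === 'prod');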

When you use config-import within your production environment, it will merge the “prod” directory with your default config/sync folder and then import the result. When you use config-import within your local dev environment, it will merge the “dev” directory with your default config and import it. Thus, each site environment gets its correct config. When you use config-export to re-export your config, only the common config in your main config/sync folder is exported; it won’t contain the environment-specific config from your “dev” environment.

Think of this like putting all your “dev” Features into one directory, and your “prod” Features into another directory. In fact, you can even tell Config Split which modules to enable on different environments and it will handle the complexity of the core.extension config that normally determines which modules are enabled.

Acquia recently updated their build tools (BLT) to support Config Split by default and no longer needs to play its own games with deciding which modules to enable on which sites. Hopefully someday we’ll see functionality like Config Split added to the core CM system.

Installing config via Profiles

A common use-case for Features is providing common packaged functionality to a profile or multi-site platform/distribution. Features strips the unique identifiers (UUIDs) associated with the config items exported to a custom module, allowing you to install the same configuration across different sites. If you just use config-export to store your site configuration into your git repository for your profile, you won’t be able to use config-import to load that configuration into a different site because the UUIDs won’t match. Thus, exporting profile-specific configuration into Features was a common way to handle this.

Drupal 8 core still lacks a great way to install a new site using pre-existing configuration from a different site, but several solutions are available:

Core Patches

Several core issues address the need to install Drupal from pre-existing config, but for the specific case of importing configuration from a *profile*, the patch in issue #2788777 is currently the most promising. This core patch will automatically detect a config/sync folder within your profile directory, import that config when the profile is installed, and properly set the Site UUID and Config UUIDs so the site matches what was originally exported. Essentially you have a true clone of the original site. If you don’t want to move your config/sync folder into the profile, you can also just specify its location using the “config_install” key in the profile.info.yml file.
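With that patch applied, a project-specific profile’s .info.yml might look roughly like this (the profile name and path are assumptions; check the issue for the exact key syntax the patch supports):

# profiles/example_profile/example_profile.info.yml (sketch)
name: Example Profile
type: profile
core: 8.x
# Only needed if the config/sync folder lives outside the profile directory:
config_install: ../../config/sync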

This patch isn’t ideal for public distributions (such as Lightning) because it would make the UUIDs of the site and config the same across every site that uses the distribution. But for project-specific profiles it works well to ensure all your devs are working on the same site ID regardless of environment.

Using Drush

Another alternative is to use a recent version of “drush site-install” with its new “--config-dir=config/sync” option. This command will install your profile, patch the site UUID, and then perform a config-import from the specified folder. However, this still has a problem when using a profile that creates its own config, since the config UUIDs created during the install process won’t match those in the config/sync folder. This can lead to obscure problems you might not detect initially, such as Drupal reporting entity schema changes only after cron runs.
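For example (the profile name and path here are placeholders):

# Sketch: install a site and import an existing config export in one step.
drush site-install my_profile --config-dir=../config/sync -y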

Config Installer Project

The Config Installer project was a good initial attempt and helped make people aware of the problem and use-case. It adds a step to the normal D8 install process that allows you to upload an archived config/sync export from another site, or specify the location of the config/sync folder to import config from. This works for simple sites, but because it is a profile itself, it often has trouble installing more complex profile-based sites, such as sites created from the Lightning distribution.

Reusable config, the original Features use case

When building a distribution or complex platform profile, you often want to modularize the functionality of your distribution and allow users to decide which pieces they want to use. Thus, you want to store different bits of configuration with the modules that actually provide the different functionality. For example, placing the “blog” content type, fields, and other config within the “blog” module in the distro so it can be reused across multiple site instances. This was often accomplished by creating a “Blog Feature” and using Features to export all related “blog” configuration to your custom module.

Isn’t that what Features was designed for? To package reusable functionality? The reality is that while this was the intention, Feature modules are inherently *not* reusable. When you export the “blog” configuration to your module, all of the machine names of your fields and content types get exported. If you properly namespaced your machine names with your project prefix, your project prefix is now part of your feature.

When another project tries to reuse your “Blog Feature”, they either need to leave your project-specific machine names alone, or manually edit all the files to change them. This limits the ability to properly reuse the functionality and incrementally improve it across multiple projects.

Creating reusable functionality on real-world complex sites is a very hard problem and propagating updates without breaking or losing improvements that have been made makes it even harder. Sometimes you’ll need cross-dependencies, such as a “related-content” field that is used on both Articles and Blogs and needs to reference other Article and Blog nodes. This can seem like a circular dependency (it’s not) and requires you to split your Features into smaller components. It also makes it much more difficult to modularize into a reusable solution. How is your “related-content” functionality supposed to know what content types are on your specific site that it might need to reference?

Configuration Templates

We have recently created the Config Actions and Config Templates modules to help address this need. They allow you to replace the machine names in your config files with variables and store the result as a “template”. You can then use an “action” to reference that template, supply values for the variables, and import the resulting config.

In a way, this is similar to how reusable functionality is achieved in a theme using SASS instead of CSS. Instead of hardcoding your project-specific names into the CSS, you create a set of SASS files that use variables. You then create a file that provides all the project-specific variable values and then “include” the reusable SASS components. Finally, you “compile” the SASS into the actual CSS the site needs to run.

Config Actions takes your templates and actions and “runs” them by importing the resulting config into your D8 site, which you then manage using the normal Configuration Management process. This allows you to split your configuration into reusable templates/actions and the site-specific variable values needed for your project. Config Templates actually uses the Features UI to help you export your configuration as templates and actions to make it more usable.

Stay tuned for my next blog where I will go into more detail about how to use Config Actions and Config Templates to build reusable solutions and other configuration management tricks.

Conclusion

While the Drupal 8 Configuration Management system is a great step forward, it is still a bit rough when dealing with complex real-world sites. Even though I have blogged in the past about “best practices” using a combination of Features and core CM, recent tools such as Config Split, installing config with profiles, and Config Templates and Actions all help better solve these problems. The Features module is really no longer needed and shouldn’t be used to deploy configuration. However, Features still provides a powerful UI and plugin system for managing configuration and in combination with new modules such as Config Actions it might finally achieve its dream of packaging reusable functionality.

To learn more about Advanced Configuration Management, come to my upcoming session at GovCon 2017 or DrupalCamp Costa Rica.  See you there!

Jun 29 2017
Jun 29

Drupal 8 Introduction

Drupal 8 is a very flexible framework for creating a community support site. Most of the functionality of a community site can be achieved via custom content types, views, and related entities, and by extending various core classes. You no longer need special-purpose modules such as Blog or Forum.

This blog will introduce several useful modules in Drupal 8 that are typically used when building a community site.

 

Segmenting the community into Groups

The key module in any community site is responsible for subdividing the users and content into smaller groups or sections. Moderation of content in a group is often assigned to a specific collection of users. Users can join a group to contribute or discuss content.

Drupal 8 has two competing modules for splitting a site into Groups:  Group and Organic Groups (OG). While both also existed in Drupal 7, several architectural changes have been made in D8.

The Group Module

The Group module makes flexible use of the Drupal entity system. Each group is an entity, which can have multiple bundles (“group types”). Any other entity, such as a node, can be associated with a group via a relationship entity, confusingly called a “group content” entity. Note that the group content entity doesn’t contain the content itself; it is merely an entity that forms the relation between the group and the actual content. The “gnode” submodule provides the group content entity for nodes.

Each group has a set of roles and permissions that users can be assigned to. Pre-created roles include “admin”, “member”, “outsider”, and “anonymous”. For example, members of a group can be given permission to create or comment on content. If a user is not assigned as a member (outsider or anonymous), they might be able to view content but not add or discuss it.

A patch (issue #2736233) adds the “ggroup” submodule which provides the group content entity for groups themselves. This allows one group to be added to another as a “subgroup”. You can currently map roles between the parent group and the subgroup, but you cannot directly inherit users from the parent group; users must also be added to the subgroup.

Because any entity can be related to a group via a custom group content entity, this module is highly flexible and customizable. Various patches and submodules are available for associating menus, domain names, notifications, taxonomy terms, and other entities to a group.

The Group module is under active development and currently has an RC1 release, with a full stable release expected shortly.

Organic Groups

The Organic Groups (OG) module was very popular in Drupal 7 and even used as the basis for the Open Atrium distribution. Many people were not aware that OG was being ported to Drupal 8 because the development was done in Github, away from the normal drupal.org project page. OG is also under active development, but no releases have been made; just a develop branch is available.

In OG, any node can be marked as a Group. So a group is still an “entity”, but it is specifically a node entity. OG also has roles and permissions, such as “member”. However, when a user is added to a group, an og_membership relationship entity is created. Node content is placed into a group via a simple entity-reference field on the node, pointing to the group node that it belongs to.

A github issue is available to allow a Group to belong to another group (subgroups), but it is more complex than simply adding an entity-reference field from one Group node to another. No concept of role mapping or user inheritance is available, nor is it planned.

While OG was used extensively in Drupal 7, its lack of releases and its more complex architecture has led many sites to start using the Group module instead, which has a more consistent framework for developers who need to integrate other entity types.

 

Subscriptions and Notifications in Drupal 8

To engage users on a community site, it must be easy to subscribe to interesting groups, content, and other users. The Message API module stack has been ported to Drupal 8 and provides the underlying message framework and email notification systems needed to tell users when their subscribed content has been added or updated. Phase2 has heavily contributed to the development and enhancement of the Message stack in Drupal 8, including the  Message Digest module that allows email notifications from Message Subscribe to be grouped into daily or weekly digests.

Flags

The message subscription feature uses the Flag module, which is also a key module on most community sites. In Drupal 8, a Flag is a fieldable entity. The ability to add custom fields to a flag makes them even more useful than in the past. For example, the Message Digest module can store the frequency of digest (daily, weekly, etc) as part of the subscription flag itself.

Rating Content

Community sites often contain a huge amount of content, and tools to help users sift through this content are needed. One way to find the most useful content is to allow users to “rank” content, either with a numeric rating system, or just a simple like/dislike. While a Flag could be used to mark a node as “liked”, the Rate module provides a more flexible mechanism, allowing you to choose between different rating systems, all using the Vote API module.

Since Comments are also entities in Drupal 8, you can rate comments, list them in ranked order of relevance, and even add a Flag for marking a specific comment as “the right answer”.

Rewarding Users

To reward your most active users, a “point system” is often used. Users who post correct answers, or have the highest rated content earn more points. Adding this and other gamification features to your site can encourage the growth of a vibrant and active community.

You can add a custom Point field to the User entity in Drupal 8 and write some rules for adding (or subtracting) points based on various conditions. With Drupal 8's flexible framework, you can easily integrate third party gamification features into your platform as well.
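As a minimal sketch of that approach (the module name “mymodule” and the field name “field_points” are assumptions, and the rule is deliberately simple), a single hook can bump a user’s point total whenever they post a comment:

// mymodule.module (sketch): award a point to a comment's author whenever
// they post a new comment.

use Drupal\comment\CommentInterface;

/**
 * Implements hook_ENTITY_TYPE_insert() for comment entities.
 */
function mymodule_comment_insert(CommentInterface $comment) {
  $account = $comment->getOwner();
  if ($account && !$account->isAnonymous() && $account->hasField('field_points')) {
    $points = (int) $account->get('field_points')->value;
    $account->set('field_points', $points + 1);
    $account->save();
  }
}

Real sites would likely centralize rules like this, but the pattern stays the same: respond to an entity event, adjust the point field, and save the account.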

 

Community Content

Community sites live and die by having useful, engaging, and relevant content. Sites will often have several different types of content, such as articles, blogs, events, documents, etc. If you try to use a module dedicated to a specific type of content, you’ll often need to override or customize behavior. In most cases it is more effective to simply create your own content type, add some custom fields, and create some views.

Take advantage of the object-oriented nature of Drupal 8. If you need a new field that has slightly custom behavior, simply extend a base field class that is close to what you need and modify it. Don’t be afraid of some custom development to achieve the specific needs of your site. Since you can often just inherit base classes from core, it’s easier to write simple and secure extensions.
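For example, a tiny custom field formatter might look roughly like this (the module name, plugin ID, and behavior are all assumptions for illustration; it simply extends core’s FormatterBase and overrides viewElements()):

// src/Plugin/Field/FieldFormatter/UppercaseFormatter.php in a hypothetical
// "mymodule" module: renders plain string values in uppercase.

namespace Drupal\mymodule\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;

/**
 * @FieldFormatter(
 *   id = "mymodule_uppercase",
 *   label = @Translation("Uppercase text"),
 *   field_types = {"string"}
 * )
 */
class UppercaseFormatter extends FormatterBase {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = [];
    foreach ($items as $delta => $item) {
      // Render each value as plain text, uppercased.
      $elements[$delta] = ['#plain_text' => mb_strtoupper($item->value)];
    }
    return $elements;
  }

}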

 

Conclusion

Every community site is different. How you best engage your community is greatly dependent on the content that is created, and by providing good tools to create and manage that content. But each community has unique motivators. Rather than using a one-size-fits-all product for your community, analyze and understand your specific requirements and prioritize those that best engage your users.

Most of the modules described here are under active development. Many do not have full stable releases yet in Drupal 8, but this is improving rapidly. If you see functionality close to your needs, get involved and help contribute to a module to make it better.

In Drupal 8 it’s easier than ever to use the enhanced core functionality along with some basic modules to achieve your needs.

Interested in learning how to speed up your support team with community content management best practices? Check out this blog!

May 16 2017
May 16

Columbia University is taking proactive steps to ensure its predominantly Drupal-based digital properties are offering the best possible experience to site visitors. Using Acquia’s Lightning distribution as a base, the CUIT team has begun to roll out a new platform on Drupal 8.

May 16 2017
May 16

Goal: Getting PHP Sessions into Redis

One of several performance-related goals for NBA.com was to get the production database to a read-only state. This included moving cache, the Dependency Injection container, and the key-value database table to Redis.  99% of all sessions were for logged-in users, which use a separate, internal instance of Drupal; but there are edge cases where anonymous users can still trigger a PHP session that gets saved to the database.

May 16 2017
May 16

DrupalCon 2017 may be over, but we’re still feeling the impact. Last week 20+ Phase2 team members and over 3,000 sponsors, attendees, and speakers converged on Baltimore for 5 days of Drupal.

In case you weren’t able to join us in person, here is a recap:

 

Impact At The Phase2 Booth

May 16 2017
May 16

The original purpose of the Features module was to “bundle reusable functionality”. The classic example was a “Photo Gallery” feature that could be created once and then used on multiple sites.

In Drupal 7, Features was also burdened with managing and deploying site configuration. This burden was removed in Drupal 8 when configuration management became part of Core, allowing Features to return to its original purpose.

May 16 2017
May 16

Building an online community is not a new topic, but the market is refocused on its growing importance because these online communities can increase customer retention, decrease customer support expenses, and increase profits.

May 16 2017
May 16

Pattern Lab is many wonderful things: a style guide, a component inventory, a prototyping system, and the embodiment of a design philosophy, all wrapped inside a fundamentally simple tool – a static site generator. It has greatly improved Phase2’s approach to how we build, theme, and design websites. Let’s talk about what Pattern Lab is, how we use it in our process by integrating it into the theme of a CMS like Drupal or WordPress, and the resulting change in our development workflow from linear to parallel.

Note: We’ll be discussing this topic in our webinar on June 16th. Register here!

What is Pattern Lab?

Pattern Lab allows us to easily create modular pieces of HTML for styling & scripting. We call these modular pieces of HTML components – you may have already heard of the iconic atoms, molecules, and organisms. Pattern Lab provides an easy-to-use interface to navigate around this component inventory.

Pattern Lab also does much more: it fills the role of a style guide by showing us colors, fonts, and font sizes selected by the design process and demonstrates common UI elements like buttons, forms, and icons along with the code needed to use them. That part is important: it’s the distinction between “this is what we’re going to build” and “this is what has been built and here’s how to use it.” Pattern Lab provides a handbook to guide the rest of your complex CMS.

Pattern Lab menu

We can also prototype with Pattern Lab because it supports “partials.” Partials allow our components to contain other components, giving us modular components. This lets us reuse our work in different contexts by not repeating ourselves, ensuring consistency of our design across a wide set of pages and viewports, and reducing the number of bugs and visual inconsistencies experienced when each page contains unique design elements. It supports either low fidelity “gray-boxing” or high fidelity “it looks like the finished site” prototyping. You can see an example of this by looking at the “Templates” and “Pages” in Pattern Lab below.

Templates and pages
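Under the hood, partials are simply pattern includes. As a small sketch (the pattern names are made up for illustration; the PHP flavor of Pattern Lab uses Mustache-style includes like these), a header organism might pull in smaller patterns:

{{! organisms-header.mustache (sketch) }}
<header class="header">
  {{> atoms-logo }}
  {{> molecules-primary-nav }}
  {{> molecules-search-form }}
</header>

Change the logo or navigation pattern once, and every template and page that includes the header picks up the change.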

To summarize, Pattern Lab is a style guide, a component inventory, and a prototyping environment where we can see our designs in the medium they are destined for: the browser! Now, let’s talk about the old way of doing things before we discuss how we implement this tool, and the difference the new way makes.

The Old Way

Old design workflow

Generally speaking (and greatly simplifying), the old process involved several linear steps that effectively block subsequent steps: the goal of each step is to create a deliverable that is a required resource for the next step to start. The main point I want to make is that in order for the front-end developers to implement the designs they need HTML, so they have to wait for the back-end developer to implement the functionality that creates the HTML.

Front-end developers just need HTML. We don’t need the HTML required for the complex logic of a CMS in order to style it. We just need HTML to style and script so we can create our deliverables: the CSS & JavaScript.

To reiterate this point: front-end devs just need HTML, wherever it comes from.

Now that we’ve set the stage and shown the problem, let’s take a look at the way we implement Pattern Lab and how that helps improve this process.

Integrating Pattern Lab and the CMS Theme

Instead of keeping our Pattern Lab site (which contains our prototype and style guide) separate from the CMS, we keep them together. Pattern Lab is just a simple static site generator that takes HTML shorthand and turns it into HTML longhand. We just put the Pattern Lab folder inside the theme right here (for a Drupal site): /sites/all/themes/theme-name/pattern-lab/. Now it’s next to other fundamental assets like CSS, JavaScript, Images, and Fonts. Sharing these assets between Pattern Lab and our CMS is a huge step forward in improving our process.

Folder Structure

theme-name/
  css/
    style.css
  js/
    script.js
  pattern-lab/
    source/     # (HTML Shorthand)
    public/     # (HTML Longhand - a.k.a. The Pattern Lab Site)
  templates/
    *.tpl.php   # (All our CMS template files)

Sharing CSS & JS Assets

With Pattern Lab inside our CMS theme folder, all we really need to do to “integrate” these two is include this HTML tag in Pattern Lab to use the CSS that our CMS theme is using:

<link href="https://www.phase2technology.com/pattern-lab-taking-our-workflow-from-a-..." rel="stylesheet" media="all">

1

<link href="../../../../css/style.css" rel="stylesheet" media="all">

And then include this HTML tag to use the CMS theme’s JavaScript:

<script src="https://www.phase2technology.com/pattern-lab-taking-our-workflow-from-a-..."></script>

1

<script src="../../../../js/script.js"></script>

Shared assets folder structure

How This Helps

All a web page needs is HTML for its content, CSS for styling that content, and JavaScript for any interaction behavior. We’ve now got two ways to make HTML: programming the CMS (which takes a lot of time) to be able to let non-technical content editors create and edit content that generates the HTML, or using Pattern Lab to write “HTML shorthand” much, much quicker. Both of those environments are linked to the same CSS and JavaScript, effectively sharing the styling and interaction behavior to both our CMS and Pattern Lab.

Now, most of the time we’re not working with our clients just to make style guides and prototypes; we’re making complex CMS platforms that scale in some really big ways. Why would we want to waste time creating the HTML twice? Well, sites this big take time to build right. Remember that the front-end developers are usually waiting for the back-end developers to program in the functionality of the CMS, which ends up creating the HTML, which the front-end developers style by writing CSS & JS.

All we need is some HTML to work with so we know our CSS and JS are working right. We don’t care if it’s editable by content editors at this point; we just want it to look like the comps! Now that front-end devs have an environment in Pattern Lab with real HTML to style and script, we can bring the comps to life in the browser with the added benefit of CSS & JS being immediately available to the CMS theme. We are effectively un-blocked, free to work outside the constraints of a back-end bottleneck. This shift from a linear process to one where back-end and front-end development can happen concurrently in a parallel process is a major step forward. Obvious benefits include speed, less re-work, clarity of progress, and a much earlier grasp on UI/UX issues.

The New Workflow

Parallel iterative process

With our style guide sharing CSS & JS with our CMS theme, we can pull up Pattern Lab pages exposing every button – and every size and color button variation – and write the CSS needed to style these buttons, then open up our CMS and see all the buttons styled exactly the way we want. We can get an exhaustive list of each text field, select box, radio button and more to style and have the results again propagate across the CMS’s pages. Especially when armed with a knowledge of the HTML that our CMS will most likely output, we can style components even before their functionality exists in the CMS!

As the back-end functionality is programmed into the CMS, HTML classes used in the Pattern Lab prototype are simply applied to the generated HTML to trigger the styling. It doesn’t matter too much whether back-end or front-end starts on a component first, as this process works in either direction! In fact, design can even be part of the fun! As designers create static comps, the front-end devs implement them in Pattern Lab, creating the CSS available in the CMS as well. Then, the Pattern Lab site acts for the designers as a resource that contains the summation of all design decisions reflected in a realistic environment: the browser. The designers can get the most up-to-date version of components, like the header for their next comp, by simply taking a screenshot and pulling it into their app of choice. This frees the designers from the minutiae of ensuring consistent spacing and typography across all comps, allowing them to focus on the specific design problem they’re attempting to solve.

When designers, front-end developers, and back-end developers are iteratively working on a solution together, and each discipline contributes their wisdom, vision, and guidance to the others, a very clear picture of the best solution crystallizes and late surprises can often be avoided.

This parallel process brings many advantages:

  • Front-end can start earlier – often before a CMS and its environment is even ready!
  • Easy HTML changes = quick iteration.
  • Back-end has front-end reference guide for markup and classes desired.
  • Pattern Lab acts as an asset library for designers.
  • Project managers and stakeholders have an overview of progress on front-end components without being encumbered by missing functionality or lack of navigation in the CMS.
  • The progress of each discipline is immediately viewable to members of other disciplines. This prevents any one discipline from going down the wrong path too far, and it also allows the result of progress from each discipline to aid and inform the other disciplines.
  • Shared vocabulary of components and no more fractured vocabulary (is it the primary button or the main button?). Pattern Lab gives it a label and menu item. We can all finally know what a Media Block objectively is now.

Conclusion

By decoupling the creation of a design system based in CSS and JavaScript (styling) from the process of altering the HTML that our CMS is generating (theming), we’re able to remove the biggest blocker most projects experience: dependence on the CMS for CSS & JS to be written. We avoid this bottleneck by creating a nimble environment for building HTML that allows us to craft the deliverables of our design system: CSS & JS. We’re doing this in a way that provides these assets instantly to the CMS, so the CMS can take advantage of them on the fly while the site is being built concurrently, iteratively, and collaboratively.

May 16 2017
May 16

In 1994, I was a huge fan of the X-Men animated series. I distinctly remember an episode titled “Time Fugitives”, which featured Cable, a time-traveling mutant vigilante from the future, talking to a floating cube that gave him historical information about the X-Men of the past. I never thought that technology would exist in my lifetime, but I found myself a week ago sitting in my living room asking my Google Home (which resembles an air freshener rather than a cube) questions about historical context.

Conversational UI’s - chatbot and voice assistant technologies - are becoming commonplace in consumers’ lives. Messaging apps alone account for 91% of all time people spend on mobile and desktop devices. Soon, almost every major smartphone and computer will be equipped with Siri, Google Assistant, Cortana, or Samsung’s Bixby. These voice assistants are even being integrated into common home electronics - televisions, set-top boxes, video game units, and even washing machines and refrigerators. Sales of home personal assistants are on the rise, with the Amazon Echo alone having increased sales nine-fold year over year. Search giants Google and Microsoft are reporting significant increases in voice searches, each claiming about 25% of mobile searches are now performed using voice.

Graphic from e-consultancy showing Google Trends data from 2008 - 2016. Source: Should financial services brands follow Capital One on to Amazon Echo?, E-Consultancy, 2017

The trends are clear - conversational UI’s are only becoming more prevalent in our lives, and some predict they will replace common technologies that we use today. In order to continue to engage audiences wherever they are, in the way they prefer to engage, companies should be investing in developing apps that leverage these technologies at home and in the workplace.

Benefits of Building Applications for Conversational UI’s Now

While you may question the business benefits of developing applications that leverage conversational UI’s at such an early stage in the maturation of this technology, there are some clear benefits that come with being on the leading edge of leveraging new technologies to engage consumers:

Early adoption can lead to great PR

Standing on the leading edge and developing applications for these emerging platforms can present a great opportunity to earn publicity, and position your organization as an innovative brand. An example of this can be seen in this eConsultancy article about CapitalOne’s Amazon Echo skill.

You can test new market opportunities

Conversational UI’s may present an opportunity to engage with a market that your organization is not currently serving. You may identify opportunities to gain new customers, improve customer satisfaction, or create new revenue streams by extending existing products and services into platforms with voice and chat interfaces. Some companies are already starting to offer paid tiers for services delivered via conversational UI applications, or to sell advertising on them.

Early adoption can provide a competitive advantage

While being first to market with a conversational UI app is not always a guarantee of success, it can provide you a leg up over the competition. If you start early, you will have an opportunity to identify best approaches to engage consumers on these new platforms, allowing you to have a well-defined experience once your competitors enter the market. Your brand may also be able to secure a percentage of the market share early due to a lower cost of user acquisition.

US consumers are creatures of habit, and prefer to go back to familiar stores, products, and services they trust. In an ideal scenario, your conversational UI application will become integrated into consumers’ work and/or home life before the market is saturated.

Potential Drawbacks

In all fairness, developing a conversational UI application is not easy. There are some associated risks that we would be remiss not to mention:

  • This is still the wild-wild west - very few best practices or standards have been established.

  • It can be expensive to develop and implement, across the myriad of devices/services.

  • There is a potentially high learning curve depending on the platform you are building for and technologies you use to develop your app.

  • At this time, there are no clear methods for efficiently testing features on voice assistant applications.

  • Deployment of content to these various platforms may require the use of many different CMS systems.

While there are risks associated with starting to leverage conversational UI applications, the long-term benefits may outweigh the short-term losses.

Stay tuned for part 2, where we will discuss how you can start leveraging conversational UI applications to build your brand and grow your business.

May 16 2017
May 16

With the recent launch of Penn State University’s main site and news site, we were able to help Penn State breathe new life into their outdated online presence, to allow prospective and current students alike to have the best experience possible on a wide array of devices. Working closely with the PSU design team, we created a complete experience from desktop to mobile, utilizing popular design patterns that would help guide the user experience while never fully stripping away content from the end user.

Utilizing the Omega theme, we used its default media queries of mobile, narrow, and normal, otherwise known as under 740px (mobile), under 980px (tablet), and anything above (desktop). These media queries helped the PSU design team explore what was possible at each of these breakpoints and how fundamental elements could be optimized for the device they are displayed on. Menus, search, curated news boxes, and featured article headers were the most notable areas where the PSU designers and Phase2 teamed up to bring out the best experience at each breakpoint.
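In CSS terms, those three ranges translate to media queries roughly like the following sketch (the comments are placeholders for the actual styles):

/* Sketch of the breakpoints described above */
@media (max-width: 739px) {
  /* mobile: stacked content, large tap targets */
}
@media (min-width: 740px) and (max-width: 979px) {
  /* narrow (tablet): condensed horizontal layout */
}
@media (min-width: 980px) {
  /* normal (desktop): full layout */
}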

 Menus:

Whether we are talking about main menus, secondary, or even tertiary, all menus have their place and purpose in guiding the user through the site to their destination. The PSU design team never forgot this principle, and substantial effort went into making sure menu items were reachable at all breakpoints. While the main menu follows standard design patterns, the desktop-to-tablet change is just a slightly more condensed version of the original, made to optimize the horizontal space of a tablet in portrait mode. Moving down to mobile, we made the biggest changes. The main menu collapses to a large clickable/tappable button that reveals or hides a vertical menu with large target areas, optimized for mobile.

The secondary menu behaves in a similar fashion to the main menu, collapsing down to a larger clickable button that reveals menu items which are also enlarged to gain visual appeal while providing a larger area for users to click on. The transformation happens earlier, at the tablet level, as we felt that the condensed horizontal space would make the text-only menu items harder to read and more difficult to click on for smaller screens.

Search:

Search was another component that Penn State needed to emphasize throughout the site. It was very important to keep this as simple as possible, so like the menus, we decided to collapse the search, for mobile only, into a drawer reveal that focused on simplicity and a large focus area. Again, we went with a large icon that provided a large target area for the mobile and tablet experience.

Curated news boxes:

On the homepage, the curated news boxes provided a fun canvas to work with content that shifts around as the device changes from desktop to mobile. Knowing that space is limited in the mobile realm, it was important to provide something visually pleasing that would still encourage the user to click through to a news story. So iconography was used to capture the specific type of news piece, while the title was left to entice the user into clicking through to the story.

Mobile curated boxes

Tablet Curated Boxes

Featured Article Header:

Imagery was crucial to the PSU redesign strategy. It was only natural to have engaging treatments for the featured article headers. If the article header implemented a slideshow, we used Flexslider. Otherwise, simple CSS scaled the images per breakpoint. The meta description related to the image would truncate and shift around depending on the breakpoint for better readability and appearance.

By implementing responsive design patterns, we were able to help the PSU team achieve their goal of making their online content and news accessible by any device.

May 16 2017
May 16

No doubt you’ve heard the phrase “Content is King.” But what exactly is content? The precise definition is subjective – it is influenced by the context in which it is defined. There is no universal definition within the industry, and it is highly likely there is no single definition within your organization.

To have a successful content strategy, it is critical that your organization determines precisely what content means to you, as its definition will inform your entire editorial experience.

An Efficient Editorial Experience

When designing editorial experiences, there is inherent friction between system architecture and user experience. The more complex the structure, the less usable the editorial experience of your CMS becomes. Content strategists strive to follow best practices when modeling content, but these object-oriented models do not take into account the workflow of tasks required to publish content.

Modern content management platforms offer organizations a variety of entities used to build an editorial experience – content types, taxonomies, components, etc. Although editors and producers learn how to use them over time, there can be a steep learning curve when figuring out how to combine these entities to perform tasks, like creating a landing page for a campaign. That learning curve can have two adverse effects on your websites:

  1. You lose efficiency in the content creation process, leading to delayed launches and increased costs.

  2. Incorrect use of the CMS, resulting in increased support and ownership costs.

Content Management Best Practice: Focus on Tasks

Avoid these risks by designing task-based editorial experiences. Task-based user interfaces, like Microsoft Windows and Mac OS X, present quick paths to whatever task your content creator wants to accomplish, rather than allowing the user to plot their own path. The greatest efficiencies can be gained by creating a single interface, or multistep interface, for accomplishing a task. Do not require the user to access multiple administrative interfaces.

To enable this set-up, perform user research to understand how content is perceived within your organization and how users of your CMS expect to create it. This is easily done by conducting stakeholder interviews to define requirements. Our digital strategy team has also found success in following practices found in the Lean methodology, quickly prototyping and testing editorial experiences to validate assumptions we make about users’ needs.

To ensure the success of your content operations, define the needs and expectations of the content editors and producers first and foremost. Equally important, prioritize tasks over CMS entities to streamline your inline editorial experience for content producers and editors.

May 16 2017
May 16

Over the past year, we’ve had the joy of working with Cycle for Survival to update the organization’s digital assets. But there’s more than one way to make an impact, so this weekend we set out to fundraise and participate in a Cycle for Survival team ride in New York City. Needless to say, it was a fun and inspirational event.

Group of Phase2 staff at Cycle for Survival

We invited Brandy Reppy, Memorial Sloan Kettering’s Associate Director of Online Operations, to share how digital technology has made an impact on the organization.

What is the Cycle for Survival mission?

Cycle for Survival is the national movement to beat rare cancers. Through a series of indoor team cycling events, Cycle for Survival raises funds that are critical for rare cancer research with 100% of every donation being directly allocated to Memorial Sloan Kettering Cancer Center within six months of the events.

Rare cancer research is drastically underfunded resulting in fewer treatment options for patients. With fewer resources devoted to understanding and treating these diseases, patients face uncertain futures – Cycle for Survival is committed to changing that.

A woman riding at Cycle for Survival holding a sign that says Phase2 rides with MSK

How does digital technology impact your mission?

Fundraising for Cycle for Survival focuses on peer-to-peer interactions. Participants register online for an event and fundraise for their team via the website. Digital technology is pivotal to allowing participants to navigate our website easily during registration and fundraising. Our website also houses critical information for our participants and their donors, so it’s critical that they can access this information seamlessly.

3 Phase2 staff members holding hands at Cycle for Survival

In what ways does Phase2 support CFS in this effort?

With Phase2, Cycle for Survival is able to efficiently manage and update digital assets. These are key resources for our participants and donors – things like updates from around the organization, information on how to get involved, and what we are doing with the funds raised – that need to be easy to access. In working with Phase2, we’ve been able to streamline the process of maintaining these assets and branding elements.

What technical strides have we made together?

With Phase2, we’ve been able to be more efficient with time and resources spent on our digital assets and have been able to quickly manage our content. The major shift has been in having a responsive site (instead of a separate mobile one). This creates one seamless experience across many devices, which allows our visitors to easily access all their information from any browser or device, and allows us to manage one code base.

May 16 2017
May 16

Developer Soft Skills

One of my earliest jobs was customer service for a call center. I worked for many clients that all had training specific to their service. No matter the type of training, whether technical or customer oriented, soft skills were always included. Margaret Rouse said, “Soft skills are personal attributes that enhance an individual’s interactions, career prospects and job performance. Unlike hard skills, which tend to be specific to a certain type of task or activity, soft skills are broadly applicable.”

In this blog series I will be discussing what I call “developer soft skills.” The hard skills in development are (among others) logic, languages, and structure. Developer soft skills are those that help a developer accomplish their tasks outside of that knowledge. I will be covering the following topics:

  • Online research
  • Troubleshooting
  • Enhancing/Customizing
  • Integrating
  • Architecting

Part 1: Online Research

One of the first skills a developer should master is online researching. This is an area with some controversy (which will be discussed later) but a necessary skill for learning about new technologies, expanding your knowledge, and solving problems.

One of the best reasons for research is continuous education. For many professions (such as the military, education and medical fields) continuing education is required to keep up on updated information, concepts, and procedures. As a developer, continuing to grow our skill set helps us develop better projects by using better code, better tools, and better methods.

Search engine queries

When researching a topic on the internet, you usually start with a search engine, so it helps to understand how a search engine works and how to get the best results. There are two parts to how a search engine works. Part one is data collection and indexing. Part two is searching, or querying, that index. I will be focusing on how to write the best possible query; to learn more about how search engines collect and index data, see this link. In order to write good queries, we should understand how search engines respond to what we type into the search box. Early search results were rendered based on simple (by today’s standards) comparisons of search terms to indexed page word usage and boolean logic. Since then, search engines have started to use natural language queries.

So we can get better results by using this to our advantage. Say I wanted to research how to make a calendar with the Java programming language. Instead of searching for keywords and distinct ideas like “java -script calendar” by themselves, I can use natural language to include phraseology and context in the query: “how can I make a calendar with java”. The first result from the keyword search returns a reference to the Java Calendar class. The first result from the second query returns example code for writing a calendar in Java. The better the query, the better the results.

Search result inspection

Once we have the right query, we can turn our attention to the results. One of the first things I do is limit the results to a date range. This prevents results from the previous decade (or earlier) from being displayed alongside more recent and applicable ones. Another way to focus our search is to limit the site that the search takes place on. If we know we want to search for a jQuery function, limit the search to jquery.com.

[Screenshot: Google search for a PHP topic, filtered by date range and site]

Once we have filtered our results, it’s time for further inspection. When viewing a results page, the first thing I look for is the context of the article or post. Does the author and/or site have a lot of ads? This can sometimes mean that the site is more about making money than providing good answers. Does the page have links or other references to related topics or ideas? This can show whether the author is knowledgeable in the subject matter.

The controversy

Earlier I mentioned online researching can be a controversial topic. One of the points of controversy is discussed in Scott Hanselman’s blog post, Am I really a developer or just a good googler? While I agree with his major point, that researching bad code can be dangerous, I contend that using a search engine can produce good results and learning opportunities.

Almost anytime you search for any programming topic, one site or group of sites is predominant in almost every result: Stack Overflow or the Stack Exchange group of sites. Several articles have been written about reasons not to use Stack Overflow, the consequences of using it, and why some developers no longer use it. Using Stack Overflow will not solve all your problems or make you a better developer.

Again, these arguments make some good points. But I think that using Stack Overflow correctly, just like good use of search engines, can produce good results. Using a Stack Exchange site comes with the benefit of community. These sites have leveraged the Stack Exchange Q&A methodology for their specific topic or technology and can be a great resource on how to solve a problem within the bounds of that community. One of my development mentors told me that there were thousands of ways to solve a programming problem and usually several wrong ones. The key is to not do one of the wrong ones and to try to find one of the best ones. Searching within a Stack Exchange site for answers can highlight the wrong ones but also provide the ones that work best in that system.

Here is an example of a Stack Overflow Drupal community response that came up when I searched for: “drupal create term programmatically.”

Image of programmatically creating taxonomy term

This response is correct, but if you look at the link provided, you will see this is for Drupal 6. If you were looking for how to do this in Drupal 7, for instance, the answer provided would not be correct. We could have improved our results by adding “Drupal 7” to our query. But most important is to keep in mind that sites like Stack Overflow, or other community sites such as php.net, include a mix of user-generated responses, meaning anyone can respond without being vetted.
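To make the version difference concrete, here is a rough sketch (my own, not taken from that answer) of how programmatic term creation differed between the two releases; the vocabulary ID of 1 is just an assumed example:

// Drupal 6: terms are passed around as arrays.
$term = array(
  'name' => 'My new term',
  'vid' => 1, // ID of an existing vocabulary (assumed).
);
taxonomy_save_term($term);

// Drupal 7: terms are saved as objects.
$term = new stdClass();
$term->name = 'My new term';
$term->vid = 1; // ID of an existing vocabulary (assumed).
taxonomy_term_save($term);

A Drupal 6 answer pasted into a Drupal 7 site fails precisely because of differences like this, which is why checking which version a result targets matters.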

Keep going

The best piece of advice I can offer for the arguments against using online search results and Stack Overflow is: “This is not the end.” Keep going past the result and research the answer. Don’t just copy and paste the code. Don’t just believe the top rated answer or blog post. Click the references cited, search the functions or API calls that are in the answer, and make the research a part of your knowledge. And then give back by writing your own articles or posting your own answers. Answering questions can sometimes be just as powerful a learning tool as searching for them.

In the end, anything you find through search, blog, and code sites should be considered a suggestion as one way of solving a problem – not necessarily the solution to your concern.

In the next post I will discuss a good use case for Stack Exchange sites, Developer Soft Skills Part 2: Troubleshooting.

May 08 2017
May 08

Goal: Getting PHP Sessions into Redis

One of several performance-related goals for NBA.com was to get the production database to a read-only state. This included moving cache, the Dependency Injection container, and the key-value database table to Redis.  99% of all sessions were for logged-in users, which use a separate, internal instance of Drupal; but there are edge cases where anonymous users can still trigger a PHP session that gets saved to the database.

For all its cool integrations with Symfony and its attempts at making everything pluggable or extendable, PHP session handling in Drupal 8 is still somewhat lacking. Drupal 8 core extends Symfony’s session handling in a way that makes a lot of assumptions, including one that developers won’t want to use any other native session handler, such as the file system or a key/value store like Redis or Memcached.

Session Handlers in Symfony and Drupal

PHP has some native session handling that’s baked in, and for basic PHP applications in simple environments, this is fine. Generally speaking, it works by storing session data in files in a temporary location on the host machine and setting a cookie header so that subsequent HTTP requests can reference the same session. However, since the default behavior doesn’t scale for everyone or meet every project’s needs, PHP offers the ability to easily swap out native session handlers. One can even create a user-defined session handler, thanks to PHP 5’s SessionHandler class.

The SessionHandler class defines some basic methods to allow a developer to create, destroy, and write session data. This class can be extended, and then ini_set('session.save_handler', TYPE) (where “TYPE” can be any of the known save handlers, such as “file” or “pdo”) and ini_set('session.save_path', PATH) (where “PATH” can be any writeable file system path or stream) can be used to tell PHP to use this extended class for handling sessions. In essence, this is what Symfony does, by extending this into a collection of NativeSessionHandler classes. This allows Symfony developers to easily choose PDO, file, or even memcached session storage by defining session handler methods for each storage mechanism.
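As a rough, framework-free illustration of that mechanism (this is not code from Symfony or Drupal), a bare-bones handler built on PHP’s SessionHandlerInterface and registered with session_set_save_handler() might look like the following; the storage directory is just a stand-in:

class FileSessionHandler implements \SessionHandlerInterface {
  private $savePath;

  public function open($savePath, $sessionName) {
    $this->savePath = $savePath;
    return TRUE;
  }

  public function close() {
    return TRUE;
  }

  public function read($id) {
    $file = $this->savePath . '/sess_' . $id;
    // read() must return a string, even when no data exists yet.
    return file_exists($file) ? (string) file_get_contents($file) : '';
  }

  public function write($id, $data) {
    return file_put_contents($this->savePath . '/sess_' . $id, $data) !== FALSE;
  }

  public function destroy($id) {
    @unlink($this->savePath . '/sess_' . $id);
    return TRUE;
  }

  public function gc($maxlifetime) {
    // Remove sessions that have not been touched within the lifetime window.
    foreach (glob($this->savePath . '/sess_*') as $file) {
      if (filemtime($file) + $maxlifetime < time()) {
        @unlink($file);
      }
    }
    return TRUE;
  }
}

// Tell PHP where to store the data and to use the custom handler.
ini_set('session.save_path', '/tmp/my_sessions'); // Assumed writable path.
session_set_save_handler(new FileSessionHandler(), TRUE);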

Symfony-based applications can normally just choose which PHP session handling is desired through simple configuration. This is well-documented at Symfony Session Management and Configuring Sessions and Save Handlers. It’s even possible to create custom session handlers by extending the NativeSessionHandler, and using ini_set() inside the class’ constructor. There is no default Redis session handler in Symfony, but there are plenty of examples out there on the Internet, such as http://athlan.pl/symfony2-redis-session-handler/

Drupal 8 extends this even further with its own SessionManager class. This SessionManager class is a custom NativeSessionHandler (PHP allows “user” as one of the session.save_handler types). As part of the SessionManager class, several optimizations have been carried over from Drupal 7, including session migration and a few other things to prevent anonymous users from saving an empty session to the database. Because of these optimizations, we don’t want to simply ignore this class. However, the NativeSessionHandler service has the database connection injected into it as a dependency, which means future attempts to simply extend Drupal’s NativeSessionHandler service class will result in vestigial dependency injection.

Implementation

Now that we understand a little more about the underpinnings of session handling in PHP, Symfony, and Drupal 8, I needed to determine how to tell Drupal to use Redis for full session management. Several important goals included:

  • Keep all of Drupal 7 and 8’s optimizations made to session handling (which originated in Pressflow).

  • Don’t patch anything; leave Drupal core as intact as possible, but don’t rely on the core behavior of using the database for session storage.

  • Leverage the Redis module for connection configuration and API.

Just Override the Core Session Service?

One option that was considered was to simply override the Drupal core service. In core.services.yml the session_manager service is defined as using the Drupal\Core\Session\SessionManager class. In theory, a simple way to change Drupal’s database-oriented session handling would be to just replace the class. In this way, we would simply pretend the SessionManager class didn’t exist, and we would be able to use our CustomRedisSessionManager class, which we would write from scratch.

However, there are a few flaws in this plan:

  • We would have to reimplement all session handler methods, even if nothing differed from Drupal’s core class methods, such as session_destroy().

  • If Drupal core changed to include new or modified session handling, we would likely have to reimplement these changes in our custom code. Being off of the upgrade path or not being included in any future security fixes would be a Bad Thing™.

For more information about the proper way to override a code service in Drupal 8, see https://www.drupal.org/node/2306083

Enter: Service Decoration

For the purpose of this blog post, I will briefly introduce service decorators; but for a more general, in-depth look, a good resource to learn about Service Decorators is Mike Potter’s blog post, Using Symfony Service Decorators in Drupal 8. This is what I used as the basis for my decision to decorate the existing core session_handler service rather than overriding it or extending it.

What is a Service Decorator?

Service decoration is a common pattern in OOP that lets developers separate the modification of a service or class from the thing they’re modifying. In a simple way, we can think of a service decorator as a diff to an existing class. It’s a way to say, “hey, still use that other service, but filter my changes on top of it.”
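As a plain-PHP illustration of the pattern (deliberately not Drupal code), the decorator wraps the original (“inner”) object, forwards calls to it, and layers its own changes on top:

interface GreeterInterface {
  public function greet($name);
}

class Greeter implements GreeterInterface {
  public function greet($name) {
    return "Hello, $name";
  }
}

// The decorator keeps the original object and adds behavior on top of it
// instead of replacing it.
class ShoutingGreeter implements GreeterInterface {
  private $inner;

  public function __construct(GreeterInterface $inner) {
    $this->inner = $inner;
  }

  public function greet($name) {
    return strtoupper($this->inner->greet($name)) . '!';
  }
}

$greeter = new ShoutingGreeter(new Greeter());
echo $greeter->greet('world'); // Prints "HELLO, WORLD!".

Drupal’s service decorators apply the same idea to container services: the decorating class receives the inner service and only overrides the behavior it cares about.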

Decorating the Session Manager Service

Symfony paves the way for services in Drupal 8, and carries with it several other design patterns, including service decorators. To decorate an existing service, you simply define a new service and use the `decorates` key in your `MODULE.services.yml` file.

For the Redis Sessions module, here is `redis_sessions.services.yml`:


services:
  # Decorate the core session_manager service to use our extended class.
  redis_sessions.session_manager:
    class: Drupal\redis_sessions\RedisSessionsSessionManager
    decorates: session_manager
    decoration_priority: -10
    arguments: ['@redis_sessions.session_manager.inner', '@request_stack', '@database', '@session_manager.metadata_bag', '@session_configuration', '@session_handler']


The `decorates` key tells Symfony and Drupal that we don’t want to use this as a separate service; instead, continue to use the core session_manager service, and decorate it with our own class. The `decoration_priority` simply adds weight (or negative weight, in this case) to tell Drupal to use our service above other services that might also try to decorate or override the session_manager class.

The `arguments` key injects the same dependencies as the original session_manager service, plus that original service itself as a sort of top-level argument. In this way, we can still use the session_manager as the service that handles PHP sessions, and it will have all of its necessary dependencies injected into it directly by our service class. This also injects that service into our class in case we need to reference any session_manager methods and treat them like parent class methods.

For the same module, here is the `RedisSessionsSessionManager.php` class constructor:


public function __construct(SessionManager $session_manager, RequestStack $request_stack, Connection $connection, MetadataBag $metadata_bag, SessionConfigurationInterface $session_configuration, $handler = NULL) {
  $this->innerService = $session_manager;
  parent::__construct($request_stack, $connection, $metadata_bag, $session_configuration, $handler);
  $save_path = $this->getSavePath();
  if (ClientFactory::hasClient()) {
    if (!empty($save_path)) {
      ini_set('session.save_path', $save_path);
      ini_set('session.save_handler', 'redis');
      $this->redis = ClientFactory::getClient();
    }
    else {
      throw new \Exception("Redis Sessions has not been configured. See 'CONFIGURATION' in README.md in the redis_sessions module for instructions.");
    }
  }
  else {
    throw new \Exception("Redis client is not found. Is Redis module enabled and configured?");
  }
}


In RedisSessionsSessionManager.php, we define the `RedisSessionsSessionManager` class, which will decorate Drupal core’s `SessionManager` class. Two things to note in our constructor are that:

  1. We set $this->innerService = $session_manager; to be able to reference the core session_manager service as an inner service.

  2. We check that the module has the necessary connection configuration to a Redis instance, and if so, we’ll use ini_set to tell PHP to use our Redis-based `session.save_path` and `session.save_handler` settings.

Everything Else is Simple

In our RedisSessionsSessionManager class, there are just a few things we want to change from the core SessionManager class. Namely, these are some Drupal-specific optimizations to keep anonymous users from creating PHP sessions that would be written to Redis (originally, the database), and session migration for users who have successfully logged in (and may have some valuable session data worth keeping).

We also have to do some extra things to make using Redis as a session handler easier. There are a few new methods that Redis Sessions uses to make looking up session data easier. Since Redis is essentially just a memory-based key-value store, we can’t easily look up session data by a Drupal user’s ID. Well, we can, but it’s an expensive operation, and that would negate the performance benefits of storing session data in Redis instead of the database. The general idea is sketched below.
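The module’s actual lookup code isn’t reproduced here, but the general idea can be sketched with the phpredis client: maintain a small secondary key that maps a Drupal user ID to the native PHP session ID, so a user’s session can be found without scanning every key. This is purely illustrative; the key names are made up, and 'PHPREDIS_SESSION:' is simply phpredis’ default prefix for session keys:

// Assumes $redis is a connected \Redis (phpredis) client and $account is
// the Drupal user who just logged in.
$redis->set('drupal:session_uid:' . $account->id(), session_id());

// Later, find that user's session without an expensive key scan.
$session_id = $redis->get('drupal:session_uid:' . $uid);
$session_data = $session_id
  ? $redis->get('PHPREDIS_SESSION:' . $session_id)
  : NULL;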

With these custom methods aside, everything else just relies on PHP’s native session handling. We’ve told PHP to use the base Redis PHP class as the handler, which is just part of having Redis support compiled in PHP. We’ve told PHP where to save the session data; in this case, a TCP stream to our redis instance configured for the Redis module.

Bonus

As of the writing of this blog post, I’ve begun the process of releasing Redis Sessions as a submodule of the Redis module. It can serve both as a practical example of creating a service decorator and as a help to high-traffic sites that wish to serve content from a read-only database. For those who would like to help test the module, here is the patch to add the Redis Sessions submodule to the Redis module.

Want to read more about Drupal 8 architecture solutions and how to evaluate each solution based on your business objectives? Download our whitepaper here

 

Apr 25 2017
Apr 25

The original purpose of the Features module was to “bundle reusable functionality”. The classic example was a “Photo Gallery” feature that could be created once and then used on multiple sites.

In Drupal 7, Features was also burdened with managing and deploying site configuration. This burden was removed in Drupal 8 when configuration management became part of Core, allowing Features to return to its original purpose.

But, as the primary maintainer of the Features module, I sadly admit that:

“Features does not actually accomplish the goal of creating truly reusable functionality.”

Let’s look more closely at the classic “Photo Gallery” example. Export your Gallery content type, your Image field storage and instance, your View, and your Image Style into a Feature module. You can copy this module to another site and install it to create a Gallery. But what happens if your other site already has an Image field you want to share? What happens when the namespace used for the features on your new site is different from the namespace of your original site? What happens if you want to add the gallery to an existing content type, such as a Blog?

The problem with configuration in Drupal is that it is full of machine names: content types, fields, views, dependencies, etc. You are supposed to prepend a unique namespace to these machine names to prevent conflicts with other modules and projects, but that means you are stuck with that namespace when trying to reuse functionality. When you make a copy of the feature module and change all of the machine names, it becomes difficult to update the original feature with any improvements that might be made on the new project.

Basically, your Feature is not actually a reusable component.

Feature Templates

Towards the end of Open Atrium development in Drupal 7, we started using an architecture that allowed reusable functionality to be layered across multiple content types. The Related Content feature added Paragraph bundles but had no opinion about which content type you added these paragraphs to. This was accomplished using the Features Template module in D7, which allowed you to create a template of configuration and use it to create multiple instances of that same configuration across multiple content types. Until now, there was no way to reuse configuration like that in Drupal 8.

Introducing: Config Actions

The new Config Actions module helps to solve this problem and provides a replacement for both the Features Template and Features Override modules for Drupal 8. Config Actions is a plugin-driven module that simply does the following:

  • Load configuration from a source

  • Transform the config data and perform optional string replacements.

  • Save the new data to a destination

These actions are read from YAML files stored within your custom module’s config/actions folder. When your module is enabled, each action is executed, allowing you to easily manipulate configuration data without writing any code. If you do want to write code, you can use the Config Actions API to manipulate configuration within your own update hooks and other functions.

Creating templates

Let’s take the “Photo Gallery” example and build a template that can be used by Config Actions:

  1. Use Features to export the configuration (content type, fields, views, etc) into a custom module (custom_gallery).

  2. Move the YAML files from the config/install folder into a config/templates folder.

  3. Edit the YAML files and replace the hard-coded machine names with variables, such as %field_name% and %content_type%.

  4. Create a Config Actions YAML file that loads configuration from these template files, performs string replacement for the variables, then saves the configuration to the active database store.

One of the edited feature configuration template files (field.storage.node.image.yml) would look something like this:




langcode: en
status: true
dependencies:
  module:
    - file
    - image
    - node
id: node.field_%field_name%
field_name: field_%field_name%
entity_type: node
type: image
...


The resulting Config Action rule looks like this:




replace:
  "%field_name%": "my_image"
  "%content_type%": "my_gallery"
actions:
  field_storage:
    # name of yml file in config/templates folder
    source: "field.storage.node.image.yml"
    dest: "field.storage.node.%field_name%"
  field_instance:
    source: "field.field.node.gallery.image.yml"
    dest: "field.field.node.%content_type%.%field_name%"
  content_type:
    source: "node.type.gallery.yml"
    dest: "node.type.%content_type%"
  view:
    source: "views.view.gallery.yml"
    dest: "views.view.%content_type%"
...


Not only does Config Actions perform string replacements within the actual YAML configuration template files, but it also replaces these variables within the action rule itself, allowing you to specify a dynamic destination to save the config.

Enabling the above module will do the same thing as enabling the original Gallery feature, but instead of creating a “gallery” content type, it will create a “my_gallery” type, and instead of an “image” field it will create a “my_image” field, and so on.

Reusing a Template

By itself, this isn’t much different from the original feature. The power comes from reusing this template in a different module.

In your “myclient” project, you can create a new custom module (myclient_gallery) that contains this simple Config Action file:




replace:
  "%field_name%": "myclient_image"
  "%content_type%": "myclient_gallery"
plugin: include
module: custom_gallery


This will cause Config Actions to include and execute the actions from the custom_gallery module created above, but will use the new string replacements to create a content type of “myclient_gallery” with a field of “myclient_image”.

The “custom_gallery” module we created above has become a reusable component, or template, that we can directly use in our own client projects. We can control the exact machine names being used, reuse fields that might already exist in our project, and customize the gallery however we need for our new client without needing to fork the original component code.

If our new client project makes improvements to the core gallery component, the patches to the custom_gallery template module can be submitted and merged, improving the component for future client projects.

Overriding Configuration

Running actions is similar to importing configuration or reverting a Feature: the action plugins manipulate the config data and save it to the active database store. Any additional imports or actions will overwrite the transformed config data. These are not “live” (runtime) overrides, like overriding config in your settings.php file in D7 or using the Config Override module in D8. The configuration stored in the database by Config Actions is the active config on the site, and is available to be edited and used in the Drupal UI just like any Feature or other imported config.

For example, here is a simple “override” action:




source: node.type.article
value:
  description: "My custom description of the article content type"
  help: "My custom help of the article content type"


When the destination is not specified, the source is used.  The “value” option provides new config data that is merged with any existing data. This action rule just changes the description and help text for the “article” content type. Simple and easy, no Feature needed.

Config Action Plugins

Plugins exist for changing config, deleting config, adding new data to config, and you can easily create your own plugins as needed.

The Source and Destination options also use plugins. Plugins exist for loading and saving config data from YAML files, from the active database store, or from simple arrays, and you can create your own plugins as needed.

For example, the above “override” action could be rewritten like this:




source: [
  description: "My custom description of the article content type"
  help: "My custom help of the article content type"
]
dest: node.type.article


This specifies the configuration data directly in the source array and is merged with the destination in the active database store.

Conclusion

Config Actions addresses many different use-cases for advanced configuration management, from templates to overrides. You can use it to collect all of your site help/description text into one place, or to create a library of reusable content components that you can use to easily build and audit your content model. Developers will appreciate the easy API that allows configuration to be manipulated from code or via Drush commands. I look forward to seeing all the different problems that people are able to solve using this module.

To learn more about Config Actions, read the extensive documentation, or come to my Birds of a Feather session at DrupalCon Baltimore to talk more about advanced configuration management topics.

Apr 18 2017
Apr 18

In the Drupal 7 days, it was pretty common for a production deployment to include the (in)famous “drush fra” line to bring in any new/updated config. Because of that, lots of former Drupal 7 developers are bringing that practice to Drupal 8. But things have changed, and this is generally a bad idea in D8.

Why shouldn’t I use Features on Production in Drupal 8?

When deploying from QA to Prod, you have tested the full site config on the QA environment (right?) and you want to mirror that onto Prod. Running drush features-import-all only handles the config that is Featurized, and a site typically contains a lot of config that you don’t have in a Feature.

Because of that, just doing drush features-import-all doesn’t ensure that Prod is a mirror of QA, thus you can have bugs/regressions/etc.

Ok, then how do I push config to the Production site?

Instead of using Features, you’ll want to do a full configuration export from the QA site (i.e., the site that has the exact configuration that you want to push to production and has been pre-release tested) to the production site.

Modify your settings.php to set the location of your config/sync folder and add that folder to your git repository. On Stage/QA, use drush config-export to export the site config and commit/push the config to git. On Production, pull from git, then use drush config-import to import the full configuration.
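As a minimal sketch (assuming a config/sync folder one directory above the web root, and a pre-8.8 version of core where the $config_directories array is still used), the settings.php piece is a single line:

// settings.php: point Drupal at a config sync folder that is kept in git.
// Drupal 8.0–8.7 uses the $config_directories array; 8.8 and later use
// $settings['config_sync_directory'] instead.
$config_directories[CONFIG_SYNC_DIRECTORY] = '../config/sync';

With that in place, drush config-export on Stage/QA writes the full site config into that folder for committing, and drush config-import on Production reads it back in after a git pull.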

But what about pushing config to other sites, like Local/Dev/QA?

To update config on any non-production sites, it’s fine to use Features. Make sure all configuration is packaged into Features modules, then run drush features-import on each module that you’d like to import.

Note that drush features-import-all is supported but not recommended. It’s better to import config only from the specific modules that you care about.

What about update hooks?

It’s sometimes a good idea to write an update hook to import your configuration, so that other developers or automated builds can pull in updated config without having to import ALL Features. The Features module includes a helper function to import/revert your custom module. However, always be sure to test that the Features module is enabled (e.g., with \Drupal::moduleHandler()->moduleExists('features')) before calling it, since not every environment might have Features enabled.
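A rough sketch of such an update hook is below. The moduleExists() check is the standard Drupal 8 API; the import call itself is a placeholder for whatever Features helper or wrapper your project uses, not a documented function name:

/**
 * Pull in updated config from the my_feature module, if Features is enabled.
 */
function my_module_update_8001() {
  // Not every environment (e.g. Production) will have Features enabled.
  if (\Drupal::moduleHandler()->moduleExists('features')) {
    // Placeholder: call your project's wrapper around the Features
    // import/revert helper (for example, built on the 'features.manager'
    // service) for the 'my_feature' module here.
    my_module_import_feature('my_feature');
  }
}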

Why even use Features at all if it can’t be used everywhere?

What’s the point of using Features if it can’t be used in Production? Why not just use config-export and config-import on ALL environments and take Features out of the picture?

Core configuration management allows you to synchronize the full configuration between two sites. While it is important to have QA and Production be synchronized, you rarely want to fully synchronize your local Development environment with QA or Prod. Often your local Dev has many configuration settings that you wouldn’t want to export, or wouldn’t want to be overwritten or lost by importing another site’s config.

Features is intended to organize related configuration and to help build reusable functionality. It isolates development changes to custom modules, allowing multiple developers to merge their work more seamlessly.

Rule of Thumb:

Use config-export and config-import when you want two sites to be identical. Use Features for all other cases.

What about config that needs to be different on each environment?

For example, say you have a piece of config on staging that needs to be slightly different than on production. You wouldn’t want to do a full config export to production, since it would overwrite that value.

In a case like that, perhaps use per-environment settings.php to set that config, or maybe look at using the State API for that piece of config.
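The settings.php route relies on Drupal 8’s built-in runtime config overrides. A minimal sketch, assuming the differing value is the site name (any config key works the same way):

// settings.php on the Stage/QA environment only.
// Runtime overrides like this are never exported or imported, so a full
// config sync will not clobber the per-environment value.
$config['system.site']['name'] = 'My Site (STAGING)';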

The config-export command also has an option for excluding certain configuration that you might not want to export/import.

Are there cases where you DO want to use Features in Production?

It’s possible. One example is the use-case of multi-site where you might have dozens of different site configs and can more easily manage that with different features on the site rather than using config-export and config-import.

Where can I find more info about this?

Here are some links:

Apr 12 2017
Apr 12

When organizations provide omni-channel solutions tailored to individual users, managing multiple authentication portals and authorization stores can be tricky. Users have to sign into multiple interfaces and remember credentials for those applications. These in turn are managed by an IT team with the responsibility of ensuring systems not only function but are secure. This task becomes exponentially more difficult and expensive as services are added to the platform.

The answer for this problem is a single sign-on (SSO) solution that allows a user to sign in once to a central authentication provider, which automatically authenticates the user to other services in the ecosystem. SSO options range from custom solutions to proprietary and open source technologies. If you’re working in academia, however, a common choice is Central Authentication Service (CAS).

CAS is an SSO service that can either be integrated into or replace your web application authentication. It is an open source project with roots at Yale University and exists today as a widely used protocol for SSO in the educational space. CAS itself isn’t involved in storing credentials. It serves as a common interface for a number of available authentication handlers, including databases, LDAP, RADIUS, OAuth, OpenID and more.

The process starts when a user visits a CAS client (aka a web application configured to work with CAS). When a user wants to login to the client app, they are redirected to the CAS login endpoint where they must authenticate. Upon successful authentication, the user is redirected back to the client app with what is known as a CAS Ticket Granting Ticket (CASTGT) and a Service Ticket (ST). The client app then sends a second request to CAS to validate the tickets. CAS sends a reply to the client with the user ID and a success message. With this information, the client app logs the user in.

If some of the logic in the workflow looks familiar, it might be because CAS is modeled after the Kerberos protocol.

CAS Architecture

There are several pre-packaged CAS client libraries for various programming languages you can leverage within your client app. Packages exist for .NET, Java, Apache, PHP, and more.

For those working in Drupal, we are fortunate not to have to build the toolset to communicate with CAS by hand or try to leverage the phpCAS library. Instead, we have the CAS module. Out of the box the CAS module gives us everything we need to get a user authenticated to our Drupal site with CAS as the authentication portal. Today I’ll be talking specifically about how to get CAS going in Drupal 8 – but never fear, there is a Drupal 7 version of the module too!

Download the 8.x-1.x-dev version of the CAS module from https://drupal.org/project/cas, unless there is a tagged D8 release, in which case use that instead. Install the module.

Once the CAS module is installed, head to /admin/config/people/cas to configure it. This configuration form can seem overwhelming at first given the number of options, so let’s cover each of them briefly.

  • Version: This is the version of the CAS protocol that your site will use to communicate with the CAS endpoint. Use the one that matches your CAS server configuration.
  • Hostname: The hostname of your CAS server
  • Port: The port your CAS server is listening on
  • URI: The path on the server that represents your primary CAS endpoint, typically “/cas”
  • SSL Verification: Options for dealing with SSL verification. If your CAS server is using a self-signed certificate you will need to choose the “Do not verify CAS server” option, but be aware that this weakens your security.
  • Gateway: Logs a user into Drupal if they are already logged into CAS. The CAS server will not “paint” the user logon screen but seamlessly log the user in using their existing CASTGT.
  • Forced Login: Unlike the Gateway service which checks if you are logged in, forced login will ensure that you are. We can limit the conditions to pages much like a block can.
  • Auto Register Users: Automatically creates Drupal users for CAS authenticated users if they don’t already have an account
  • Drupal Logout Triggers CAS Logout: When you logout of Drupal, you log out of CAS as well. The site will send CAS your logout request and you will be logged out of Drupal at the same time.
  • Enable single log out: When you logout in CAS, you logout of Drupal. This is not a recommended setting since it allows your session to be stored un-hashed.
  • Proxy: Leverages the proxy features in CAS
  • Debugging: Logs debugging information to Drupal watchdog. It is very handy when developing but should be enabled on an as-needed basis in production due to the log spam it can create.

Let’s assume for the sake of this discussion that we have configured our Drupal site as such:

  • Gateway: disabled
  • Forced Login: disabled
  • Auto register users: disabled

As an anonymous user, visit your Drupal site and try to login with the above settings. You will be presented with the core Drupal login form. So why aren’t we being redirected to CAS for authentication? To find that answer, we need to understand a bit more about how the CAS module works in D8.

Recall that our configuration has the forced login and gateway options disabled, so CAS won’t be invoked on a user by force. But by visiting the /user/login path you might expect to be redirected to the CAS server endpoint. However, the CAS module doesn’t work that way. It provides its own path to trigger a CAS authentication workflow.

Instead of modifying the /user/login route path, the CAS module provides a new route at /caslogin. The controller for this route is responsible for creating the redirect to CAS. The redirect specifies the location of the CAS server as well as the URL of the client app that CAS should send the user to after authenticating. To accomplish this, the HTTP response includes a query string parameter, service, appended to the Location value of the header.

└╼ curl -I http://example.com/caslogin

HTTP/1.1 302 Found

Server: Apache/2.4.16 (Unix) PHP/5.5.31 OpenSSL/0.9.8zg

Cache-Control: must-revalidate, no-cache, private

Location: https://cas.example.com:443/cas/login?service=http%3A//example.com/cass…

Content-language: en

Content-Type: text/html; charset=UTF-8

The CAS module exposes another route at /casservice. This route controller is responsible for sending the CASTGT and ST tickets to CAS for validation and for logging the user in to Drupal. With a successful authentication response from the CAS server, the module determines if the now-authenticated CAS user should be logged into Drupal. When doing so, the module maps the user ID from CAS to the Drupal user name. So if CAS returns a user ID of xyz123 along with the success message, that name is used as the Drupal user name during processing. In our case, the Drupal user xyz123 would need to exist in order to be logged in. To have xyz123 automatically registered and logged in as a new user, we would have to enable the auto register users option in the CAS configuration form.

To make the forced login and gateway features work, the CAS module adds an event listener to every page request. In the event handler, CAS triggers the gateway and forced login condition checks and returns a redirect response if the user needs to be sent to CAS for authentication:

$event->setResponse(new CasRedirectResponse($cas_login_url));

The CAS module adds its own events that other modules can subscribe to, which “hook” into its workflow (a minimal subscriber sketch follows the list):

  • CasPreAuthEvent: Allows modules to change data received from CAS before attempting to load a Drupal user
  • CasUserLoadEvent: Modules can prevent users from logging in or can modify user attributes before they are saved to the Drupal account
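Wiring up one of these events follows the standard Drupal 8 event subscriber pattern. The sketch below is illustrative only: the event name string and the event object’s methods depend on the CAS module release you are running, so check the module’s event classes before copying it. Registering the subscriber also requires a matching entry (tagged as event_subscriber) in your module’s services file:

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class MyCasSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    // Hypothetical event name; use the constant the CAS module defines for
    // its user-load event.
    return ['cas.pre_user_load' => 'onPreUserLoad'];
  }

  public function onPreUserLoad($event) {
    // Example: block a specific CAS account from logging in, or adjust
    // attributes before they are saved to the Drupal account.
  }

}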

As we have seen, CAS is a versatile SSO solution that works with authorization stores like LDAP or Oauth to provide a common portal for integrated touch points. And CAS client apps can interact with the CAS service using existing libraries like phpCAS, or leverage the CAS module if working in Drupal. We looked at how to configure the CAS module to work in Drupal 8 without writing a single line of code. Hopefully you now have enough knowledge to pursue your own CAS project, whether it be in Drupal or any other web application.

Oct 25 2016
Oct 25

Drupal 8 is more modular and customizable than ever before via plugins and the use of services. With the plugin system it is easy to subclass existing base plugins to add functionality, such as custom blocks, forms, filters, and much more. But how do you customize a service?

When writing the new Features module for Drupal 8, we needed a way to modify the existing  ConfigInstaller service in Drupal core to allow pre-existing configuration to be installed on a site (allowing a feature to be installed on the same site that created it). All we needed was a small change to the findPreExistingConfiguration() method of the service.

Replacing a Service

Our first attempt involved completely replacing the normal config.installer service with our own subclass via altering the ServiceProvider:


class FeaturesServiceProvider extends ServiceProviderBase {
  public function alter(ContainerBuilder $container) {
    $definition = $container->getDefinition('config.installer');
    $definition->setClass('Drupal\features\FeaturesConfigInstaller');
  }
}


The problem with this method shows up if you want to install another module that also needs to alter the ConfigInstaller service. Who wins?

Introducing Decorators

The Symfony framework provides a mechanism known as service decorators that allows you to chain services together and override a service’s behavior without completely replacing it. All you need to do is add an argument to your new service that points to the parent service in your module.services.yml file.


mymodule.myservice:
  class: Drupal\mymodule\MyService
  public: false
  decorates: parent.service
  decoration_priority: 3
  arguments: ['@mymodule.myservice.inner', ...]


Notice the “decorates” key, which identifies which service is being overridden, and the @mymodule.myservice.inner argument being passed to the new service. The “decoration_priority” indicates the priority of the override, with higher priorities running first.

Now we can support multiple modules that all override the same service. Imagine modules A and B that both override the config.installer service by providing their own decorators each with their own priority.

A new Constructor

In order to accept the new “inner” argument, you need to write a new __construct() method in your service class. You’ll want to save the previous service instance so you can call it elsewhere in your code.


public function __construct(WhateverServiceInterface $inner_service, ...$other_args) {
  $this->innerService = $inner_service;
  parent::__construct(...$other_args);
}


When you implement the other methods in your service and want to call a public function from the base class, instead of using  $this->method() you simply use  $this->innerService->method().

Interfaces vs. Subclassing

The key part of using service decorators is that your new service needs to have the same class interface as the existing service. In many examples, this is shown literally as a new service that implements a specific interface.  For example:

class MyService implements WhateverServiceInterface {}

However, the problem with this is that you’ll need to implement every method specified in the interface, duplicating much of the code from the existing service (unless your service really does need to do something completely different). You’ll see other examples showing a lot of this:


public function someMethod() {
  $this->innerService->someMethod();
}


just to duplicate the existing functionality of the service.

In the case of Features, we didn’t want to re-implement the entire ConfigInstaller service, we just wanted to override a small piece of it. Fortunately, you don’t need to create an entirely new class, you can just subclass the existing service:

class MyService extends ParentServiceClass {}

Now you only need to implement the methods you actually want to change. You’ll still use  $this->innerService to call any public functions within your methods, but you don’t need to re-implement every public method.

As an alternative to using  $this->innerService everywhere, you can use the magic __call() method within your new class:


public function __call($method, $args) {
  // Forward any method not defined here to the decorated (inner) service.
  return call_user_func_array([$this->innerService, $method], $args);
}


This will intercept all method calls not defined in your new service and redirect them to the  innerService.

Public vs Protected Methods

The tricky part for Features was that the  findPreExistingConfiguration() method we wanted to override is actually a protected method and calls other protected methods. Using a subclass of the existing service we can easily override a protected method, but what about calling  $this->innerService? The innerService can only access the public functions in the interface and cannot be used to call other protected or private methods.

We decided to just give it a try to see what would happen. As expected, our overridden protected method completely replaced the behavior of the core service. Because it didn’t use the innerService argument, any additional module that also decorated the config.installer service also got the overridden protected method added by Features, as long as the decoration_priority of Features was higher than the other module’s.

This is exactly what we wanted! When overriding a protected method and not using innerService, you cannot have two decorators override the exact same method. But two decorators still work fine together when they override different methods. While not as clean as pure decorators, it was still a much better solution than completely swapping the service using the ServiceProvider::alter() method. We added this to the 8.x-3.0-rc1 release of Features.

Conclusion

I created a d8_decorators github repository to demonstrate various different decorators and how they can be chained together and how they can override different methods or the same methods of core services. Feel free to play with enabling different modules to see the results.

What we learned is that Symfony decorators are another powerful way to modify and extend Drupal 8. They can be used in more ways than perhaps intended, such as subclassing existing services and even overriding protected service methods.


Jun 04 2015
Jun 04

Since its beginning, Open Atrium has had Discussion Forums, allowing members to collaborate on various topics.  In the Winter 2015 2.3x release, we added Related Content, which allowed you to attach a Discussion Post to other content, such as an Event.  But what if you wanted to have a discussion around a piece of content directly without creating a separate related discussion post?  In the new Spring 2.4x release, you can “Reply to Anything,” whether it’s core content such as an Event or a custom site-specific content type.

Drupal Comments

At a high level, the “Reply to Anything” release of Atrium was a simple task of enabling normal Drupal comments for any content type.  The Atrium Discussion forum didn’t use Comments, but instead used the same content type for “Replies” as for the “Original Post.”  While this was architecturally convenient and allowed Replies to contain attachments, it didn’t allow Replies to be added to other content types easily.

Comments in Drupal tend to get a bad rep.  Many feel that comments look ugly, don’t support rich content such as attachments, or are subject to spam and require serious moderation.  The challenge for Atrium was to enable Comments while dealing with some of these complaints.

Improving Comments

Significant testing and feedback went into the original design of the Atrium Discussion forums.  We decided to implement the same functionality for Comments, plus some new features:

  1. Personalization: causing new comments to be auto-expanded, while old comments are collapsed.
  2. Attachments: rather than just allowing attachments to be added to Comments, the entire Related Content paragraphs architecture was re-used.  More on this below.
  3. Migration: previous Discussion Replies are migrated automatically into Comments.
  4. Easy Administration: rather than editing the content type to enable Comments, a central UI interface is used to choose which content types use comments.
  5. Threaded Discussions: support comments to comments, allowing fully threaded discussions.

The result is a consistent and intuitive interface across Atrium for handling comments to content, whether it’s a Discussion post, a worktracker Task, an Event, a Space, or any other type of content.

Rich Comment Content

Re-using the Related Content work from v2.3x, we were able to support very rich comment content.  For example, the screenshot in the previous section shows a comment with an image and two columns of text.  Rather than just using the WYSIWYG to embed an image, that comment uses the Media Gallery paragraph type to add the image, along with a Text paragraph to add two columns of text.  You can even use the Related Content to embed another live discussion along with its own comments and reply form within another comment.  Comment Inception!   In the past you could only add a file attachment to a Reply.  With Related Content you can add a Related Document to a Comment, which might be a file attachment, but might also be just a Wiki-like web document.
When integrating the Related Content we also did a large amount of UX improvement.  The different paragraph types are now represented with icon “tabs” along the bottom of the WYSIWYG editor, much like the tabs at the bottom of your Facebook status field.  Using a Drupal hook you can even specify icons for your own custom paragraph types!  This new UX for Related Content paragraphs was taken from Comments and then extended to work on the Body of the node/edit form, providing a consistent Related Content experience across all of Atrium.  You can separately control which paragraph types are available for the node Body vs available for Comments.

What can I do with all this?

Technical features are fine, but it’s really all about the client needs that can be solved.  Here are some of the use-cases you can solve now using Atrium:

  1. Feedback and Collaboration on Anything:  Threaded discussions on any content type, not just the Discussion posts, without needing to use Related Content.  Because of Atrium’s strong data privacy controls, comments are added by Members of a Space and are less subject to spam or needing moderation.  However, full comment approval moderation is also still available.  Comment threads can be open or closed on a per-node basis.
  2. Social Feeds: Enable comments on Space, Section, or even Team pages, providing a “Status Feed” functionality.  Users can quickly and easily post comments (status updates) and have them appear in the Recent Activity.  If you enable comments on User Profiles (from the Open Atrium Profile2 app), you can even support the concept of a “Facebook Wall” where users can post comments (status) on a specific user’s profile dashboard.  These are areas still requiring some improvements to the UX that you will see in future versions of Atrium to make this a more useable social experience, but you can get started with it now.
  3. Fieldable Comments:  By adding new paragraph entity bundles, you are essentially adding optional fields to comments.  Developers can define templates to control the edit and view experience for custom fields.  Using the included Comments Alter module, comments can actually change the values of fields on the parent content node, such as the Status, Type, and Priority fields on the worktracker Task content.
  4. Email Integration: As with past Discussion Replies, adding a Comment causes a notification email to be sent.  Users can reply to the email and the reply will be posted back to the Atrium site.  This now works with any comments on any content type, such as replying to comments from a worktracker Task.

Conclusion

Many users of Atrium have asked for comment support, which was specifically disabled in past versions.  Now Atrium fully supports the Drupal Comment system and everything sites want to do with it.  Integrating the Related Content work into Comments provides powerful functionality that is implemented consistently and intuitively across the entire platform.  Allowing Comments on anything further pushes the core mission of Atrium to enable and enhance collaboration across your organization.

Want to learn more or see the new comments in action?  Sign up for the Open Atrium Spring Release webinar on Thursday, June 4th, at 12:30pm EST.

Jun 03 2015
Jun 03

Since its beginning, Open Atrium has had Discussion Forums, allowing members to collaborate on various topics.  In the Winter 2015 2.3x release, we added Related Content, which allowed you to attach a Discussion Post to other content, such as an Event.  But what if you wanted to have a discussion around a piece of content directly, without creating a separate related discussion post?  In the new Spring 2.4x release, you can “Reply to Anything,” whether it’s core content such as an Event or a custom site-specific content type.

Drupal Comments

At a high level, the “Reply to Anything” release of Atrium was a simple task of enabling normal Drupal comments for any content type.  The Atrium Discussion forum didn’t use Comments, but instead used the same content type for “Replies” as for the “Original Post.”  While this was architecturally convenient and allowed Replies to contain attachments, it didn’t allow Replies to be added to other content types easily.
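
For background, stock Drupal 7 (the version Open Atrium 2.x is built on) stores the per-content-type comment default in a variable named comment_<type>.  The minimal sketch below shows what “turning comments on” looks like at that API level; the oa_event machine name is a hypothetical example, and this illustrates the underlying Drupal mechanism rather than Atrium’s own central administration UI.

  <?php
  // Minimal Drupal 7 sketch: enable open, threaded commenting on a
  // content type.  The machine name 'oa_event' below is hypothetical.
  function example_enable_comments_on_type($type) {
    // Core stores the per-bundle comment default in 'comment_<type>'.
    variable_set('comment_' . $type, COMMENT_NODE_OPEN);
    // Allow threading and set how many comments display per page.
    variable_set('comment_default_mode_' . $type, COMMENT_MODE_THREADED);
    variable_set('comment_default_per_page_' . $type, 50);
  }

  // Usage: enable comments on a hypothetical event content type.
  example_enable_comments_on_type('oa_event');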

Comments in Drupal tend to get a bad rep.  Many feel that comments look ugly, don’t support rich content such as attachments, or are subject to spam and require serious moderation.  The challenge for Atrium was to enable Comments while dealing with some of these complaints.

Improving Comments

Significant testing and feedback went into the original design of the Atrium Discussion forums.  We decided to implement the same functionality for Comments, plus some new features:

  1. Personalization: new comments are auto-expanded, while old comments are collapsed.
  2. Attachments: rather than just allowing attachments to be added to Comments, the entire Related Content paragraphs architecture was re-used.  More on this below.
  3. Migration: previous Discussion Replies are migrated automatically into Comments.
  4. Easy Administration: rather than editing each content type to enable Comments, a central UI is used to choose which content types use comments.
  5. Threaded Discussions: comments on comments are supported, allowing fully threaded discussions.

The result is a consistent and intuitive interface across Atrium for handling comments to content, whether it’s a Discussion post, a worktracker Task, an Event, a Space, or any other type of content.

Rich Comment Content

Re-using the Related Content work from v2.3x, we were able to support very rich comment content.  For example, the screenshot in the previous section shows a comment with an image and two columns of text.  Rather than just using the WYSIWYG to embed an image, that comment uses the Media Gallery paragraph type to add the image, along with a Text paragraph to add two columns of text.  You can even use Related Content to embed another live discussion, along with its own comments and reply form, within another comment.  Comment Inception!  In the past you could only add a file attachment to a Reply.  With Related Content you can add a Related Document to a Comment, which might be a file attachment, but might also be just a Wiki-like web document.
When integrating the Related Content, we also made a large number of UX improvements.  The different paragraph types are now represented with icon “tabs” along the bottom of the WYSIWYG editor, much like the tabs at the bottom of your Facebook status field.  Using a Drupal hook you can even specify icons for your own custom paragraph types!  This new UX for Related Content paragraphs was taken from Comments and then extended to work on the Body of the node/edit form, providing a consistent Related Content experience across all of Atrium.  You can separately control which paragraph types are available for the node Body vs. for Comments.
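
As a rough illustration of what hook-based icon registration typically looks like in Drupal 7, here is a minimal sketch.  The hook name and array keys below are hypothetical placeholders, since this post doesn’t name the actual hook Atrium exposes; consult the Open Atrium documentation for the real one.

  <?php
  /**
   * Hypothetical hook implementation: register an icon "tab" for a
   * custom paragraph type.  The hook name and expected structure are
   * placeholders for illustration only.
   */
  function mymodule_paragraph_icons_alter(&$icons) {
    // Map a custom paragraph bundle's machine name to a label and icon.
    $icons['my_custom_paragraph'] = array(
      'title' => t('My Custom Paragraph'),
      'icon' => drupal_get_path('module', 'mymodule') . '/images/my-icon.png',
    );
  }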

What can I do with all this?

Technical features are fine, but it’s really all about the client needs they solve.  Here are some of the use cases you can now address with Atrium:

  1. Feedback and Collaboration on Anything:  Threaded discussions on any content type, not just Discussion posts, without needing to use Related Content.  Because of Atrium’s strong data privacy controls, comments are added by Members of a Space and are less subject to spam and less in need of moderation.  However, full comment approval moderation is also still available.  Comment threads can be opened or closed on a per-node basis.
  2. Social Feeds: Enable comments on Space, Section, or even Team pages, providing “Status Feed” functionality.  Users can quickly and easily post comments (status updates) and have them appear in the Recent Activity.  If you enable comments on User Profiles (from the Open Atrium Profile2 app), you can even support the concept of a “Facebook Wall,” where users post comments (status) on a specific user’s profile dashboard.  The UX in these areas still needs some improvement, which you will see in future versions of Atrium as this becomes a more usable social experience, but you can get started with it now.
  3. Fieldable Comments:  By adding new paragraph entity bundles, you are essentially adding optional fields to comments.  Developers can define templates to control the edit and view experience for custom fields.  Using the included Comments Alter module, comments can actually change the values of fields on the parent content node, such as the Status, Type, and Priority fields on the worktracker Task content (a rough sketch of this pattern follows this list).
  4. Email Integration: As with past Discussion Replies, adding a Comment causes a notification email to be sent.  Users can reply to the email and the reply will be posted back to the Atrium site.  This now works with any comments on any content type, such as replying to comments from a worktracker Task.
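
To make the Comments Alter idea concrete, here is a minimal Drupal 7 sketch of the general pattern of a comment save pushing a value onto its parent node.  This is not the Comments Alter module’s actual implementation, and the field and content type machine names (field_task_status, oa_worktracker_task) are illustrative assumptions.

  <?php
  /**
   * Implements hook_comment_insert().
   *
   * Illustrative sketch only: when a saved comment carries a status
   * value, copy it onto the parent node.  Field and bundle names are
   * hypothetical; this is not the Comments Alter module's code.
   */
  function mymodule_comment_insert($comment) {
    if (empty($comment->field_task_status[LANGUAGE_NONE][0]['value'])) {
      return;
    }
    $node = node_load($comment->nid);
    if ($node && $node->type === 'oa_worktracker_task') {
      $node->field_task_status[LANGUAGE_NONE][0]['value'] =
        $comment->field_task_status[LANGUAGE_NONE][0]['value'];
      node_save($node);
    }
  }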

Conclusion

Many users of Atrium have asked for comment support, which was specifically disabled in past versions.  Now Atrium fully supports the Drupal Comment system and the things sites want to do with it.  Integrating the Related Content work into Comments provides powerful functionality that is implemented consistently and intuitively across the entire platform.  Allowing Comments on anything further advances Atrium’s core mission of enabling and enhancing collaboration across your organization.

Want to learn more or see the new comments in action?  Sign up for the Open Atrium Spring Release webinar on Thursday, June 4th, at 12:30pm EST.

May 14 2015
May 14

Last week, we were proud to announce the launch of Memorial Sloan Kettering Cancer Center’s enterprise Drupal 8 site, one of the first major Drupal 8 implementations in the U.S. One of the awesome benefits of working on this project was the opportunity to move Drupal 8 forward from beta to official release. Phase2 has been instrumental in accelerating Drupal 8, and we were excited that Memorial Sloan Kettering was equally invested in giving back to the community.

Benefits of starting early

Getting started during the beta phase of Drupal 8 meant that it wasn’t too late to get bug fixes and task issues committed. Even feature requests can make their way in if the benefits outweigh the necessary changes to core.

Similarly, if other agencies and shops starting to use Drupal 8 are going through many of the same issues, there is more of an opportunity for collaboration (both on core issues and on contrib upgrades) than on a typical Drupal 7 project.

By the numbers

As of this writing, 57 patches have been directly contributed and committed to Drupal 8 as part of this project. Additionally, nearly 100 issues have been reviewed, marked RTBC, and committed. Hundreds of old and long neglected issues have been reviewed and moved closer to being ready.

Often, to take a break from a particularly tricky issue, I’d switch to “Issue Queue Triage” mode and dive into some of the oldest, most neglected corners of the queue. This work brought the age of the oldest Needs Review bugs down from roughly 4 years to less than 4 months (the oldest crept back up to 6 months once I started circling back on myself).

This activity is a great way to learn about all the various parts of Drupal 8. Older issues stuck at Needs Review usually need, at minimum, a substantial reroll. I found that once I tagged something with Needs Reroll, legions of folks swooped in and did just that, increasing activity on most issues and eventually getting many of them committed.

One of my favorite but uncommitted patches adds Views integration for the Date module. It’s still marked Needs Review, so go forth and review! Another patch, which is too late for 8.0.0, adds a very basic draft/moderation workflow to core. This patch is another amazing example of how powerful core has become: it is essentially just UI work on top of APIs already in Drupal 8.

Porting contrib modules to Drupal 8

This project has contributed patches and pull requests for Drupal 8 versions of Redirect, Global Redirect, Login Security, Masquerade, Diff, Redis, Memcache, and Node Order.

One of the remarkable things about this project, and a testament to the power of Drupal 8, is how few contributed modules were needed: roughly 114 contrib modules on the Drupal 6 site, compared to only 10 on the Drupal 8 site.

Considering Drupal 8 for your organization? Sign up for a complimentary Drupal 8 consultation with the Phase2 Drupal 8 experts.
