Sep 25 2019

Yesterday the digital experience world and the Drupal community received the long-awaited answer to the question of what’s going to happen with Acquia, when it was announced (first on Bloomberg) that Vista Equity Partners would be buying a majority stake in Acquia in a deal that values the company at $1B.

Many were caught off guard by the timing, but an event like this had been expected for a long time. After receiving nine rounds of venture funding totaling $173.5M, it was time. As the leader and largest company in the Drupal space, Acquia has a center of gravity that leaves many asking a new question: What Now for Drupal?

What Are the Angles?

Before I attempt to answer what I think this means for Drupal and the Drupal community, I think it is worthwhile to at least speculate on the strategy Acquia plans to pursue as a part of Vista. Everyone I have heard from, both offline and online, since yesterday’s announcement has been speculating on the Vista angle (i.e., why did they want Acquia?). TechCrunch led with “Vista Equity Partners...likes to purchase undervalued tech companies and turn them around for a hefty profit…” Well, that’s pretty much what a PE firm does, and to me it’s less interesting than asking: What does Acquia want from Vista?

What I believe Acquia wanted to get out of this is a heavyweight partner with capital and connections that could help develop Acquia into a more formidable competitor to Adobe, Sitecore, and other digital experience platforms (“DXP”). It was just last week that Salesforce Ventures made a very sizeable $300M investment in Automattic, the parent company of WordPress. Things are heating up among all of the top digital experience platforms, and no one is going to survive, let alone stay at the front of the pack, without some serious capital behind them.

Who Wins?

I believe Acquia plans to use Vista’s investment and resources to continue making targeted acquisitions and investments to become a more robust and powerful digital experience platform. I would expect Acquia to grow its suite of products, invest even more heavily in sales and marketing to increase revenue, and grow its installed base of customers.

Vista will then have a more valuable asset from which to pursue either an IPO or a strategic acquisition. It is possible this will follow the pattern of Marketo, which Vista bought and then sold to Adobe for a $3B profit, or Ping, which they recently took public in an IPO.

So there are mutual interests being met and a fair valuation that gets the necessary attention - so both parties win. I also think customers win from increased product development, competition, and a more robust ecosystem.

What Does This Mean For Drupal?

I think this is the best of all possible scenarios for both Drupal (the product) and the Drupal community. While many will bemoan the intrusion of a large private equity firm into the sacred space of an open source community, change was inevitable and it comes with predictable tradeoffs that have to be measured in the context of a new reality for the space. The community needs the indirect investment that this deal provides and it far outweighs the alternatives. If you assume that there were only a few possible scenarios for Acquia that were going to play out sooner or later, they would be:

  1. Organic growth / status quo - In my opinion, the worst scenario, due to the converging dynamics of the market. Without a huge infusion of capital like the Vista deal into Acquia, Drupal simply wouldn’t be able to compete fast enough to stay in the top DXP category against Adobe, Sitecore, Salesforce, and WordPress.

  2. IPO - As a liquidation event for VC investors, this could be perhaps the most lucrative, but the public markets are fickle and I believe that would be very hard on a large open source community and product like Drupal due to the dynamics of control for a public company. This may yet come to pass as the end game for Vista, but I think it is good it was not the immediate play. 

  3. Strategic Acquisition - Salesforce, Amazon, Google, IBM, and others of this size would be likely acquirers. Again, this may yet come to pass, but it would not have been an ideal short-term play for Drupal because of the weight of influence such a giant would place on the community and the open source dynamic.

  4. PE - Obviously, what did happen. This deal brings the financial strength and strategic opportunities without the messiness of the public markets or a new giant controlling the ecosystem. 

As for the direct benefits to the Drupal project, I take Dries at his word in the personal statement he made on his blog that this strategy will allow Acquia to provide even more for Drupal and the community including: 

  • Sponsor more Drupal and Mautic community events and meetups.

  • Increase the amount of Open Source code [sic] contributed.

  • Fund initiatives to improve diversity in Drupal and Mautic; to enable people from underrepresented groups to contribute, attend community events, and more.

Those are all things that directly benefit the community and make open source Drupal better in addition to the opportunities that the deal affords Acquia to better compete against its rivals. 

How Things Line Up From Here…

Consolidation and funding in the digital experience platform (“DXP”) space are going to make for a wild ride as the top players continue to unveil pieces of their strategy.  

  • Adobe - With Magento and Marketo neatly tucked in, Adobe remains the most competitive player both in terms of market share and the comprehensiveness of its offering, though cost and proprietary lock-in to a single homogeneous platform are continued weaknesses.

  • Acquia / Drupal - Recent acquisitions of platform components like Mautic and Cohesion are likely to continue or increase after the Vista deal in an effort to bring an open and more heterogeneous alternative to bear against the others. 

  • Sitecore - The recent acquisition of a top service provider, Hedgehog, followed by the subsequent announcement that Sitecore was laying off 7% of its workforce, can’t be interpreted as strong signs of health, but the enterprise market is full of Microsoft ecosystems that will be partial to Sitecore’s underlying technology.

  • Automattic / WordPress - I have less insight into the WordPress space than I do Drupal, but the Salesforce Ventures investment doesn’t feel like an attempt to gain a CMS for its own offering (sidenote: Salesforce does have a “CMS”, and its Ventures arm has invested in other CMSs like Contentful). Founder Matt Mullenweg told TechCrunch that Automattic doesn’t want to change course and that, with the new influx of cash, there won’t be any big departure from the current lineup of products and services: “The roadmap is the same. I just think we might be able to do it in five years instead of 10.” Their recent acquisition of Tumblr is part of a strategy I don’t fully understand, but it seems to be a continued volume-market move into the larger media space and less about competing with the other platform providers. However, $300M could go a long way in tooling the platform for lots of purposes.

I also think there is a lot more to watch on the related martech front surrounding customer data. In April, Salesforce and Adobe announced (in the same week) that they were acquiring competing Customer Data Platform (CDP) products. So this is about the whole digital experience stack; we are likely to see more acquisitions and consolidation beyond the CMS.

What Does This Mean For Our Clients?

Despite the race to create the killer platform, most of our clients have consciously, or organically, adopted heterogeneous digital experience platforms. This means they rely on many different components to “weave” together solutions that meet their unique requirements. As Forrester explains, DX is both a platform and a strategy, and despite the influence of these major software and cloud players, a “digital experience” still needs to be created - that includes strategy, customer research, UX, design, content, brand, and the integration of custom and legacy software and data sources in addition to purchased software. Still, we believe our customers do need to be aware of the changing dynamics in the market, and in particular how consolidation will affect their platform investments.

What Does This Mean For Phase2?

At Phase2, this news comes with much interest. We were one of the very first Acquia partners named after the company was founded in 2008. Over the last 10+ years, we have shared, and continue to share, numerous clients. We are also prolific contributors and implementers in the Drupal space who have been a part of some of the biggest and most impactful Drupal moments over the last ten years. We ourselves once invested heavily in creating products that extended and enhanced the capabilities of Drupal, because we believe it is a powerful platform for creating digital experiences.

Over time, as our agency grew and moved “up market”, we diversified our expertise: we became Salesforce partners, developed commerce expertise, and enhanced our design, UX, and creative capabilities. We also use WordPress, JavaScript frameworks for decoupled sites, and static site generators in conjunction with a wide variety of marketing technologies to create digital experience platforms that go beyond websites and CMSs.

We will continue to monitor the trends and prepare and enable ourselves to create digital experiences that advance our clients’ goals, and we fully expect Drupal will remain a key component of building those experiences well into the future.

Jul 23 2019

Here at Phase2, we’re excited to participate in the premier Drupal government event, Drupal GovCon, held at the National Institutes of Health campus in Bethesda, Maryland, where we’ll once again be a platinum sponsor.

We hope you will join us this week for one of North America’s largest Drupal camps.

You can find Phase2 at our sponsor booth and all over the session schedule:

Why Migrate To Drupal 8 Now

With Drupal 9’s release impending, there has been a resurgence of chatter around Drupal 6/7 migration. If your government organization is still on Drupal 6 or 7, you won’t want to miss this session. You can read more about the subject here from our on-site presenters, Tobby Hagler, Director of Engineering, and Felicia Haynes, Vice President of Accounts.

Measuring What Matters: Using Analytics To Inform Content Strategy

Content strategy is the backbone of any successful digital experience. It’s also rarely considered an analytical process, but it should be! Catch our session to learn more, and in the meantime, read about the top 3 content strategy tips for government ahead of Jason Hamrick’s session.

Accessible Design: Empathy In A World With No Average

Our Accessibility expert, Catharine McNally, has a special treat for her session’s attendees: she’ll be hosting some interactive empathy building exercises, to showcase the impact of accessible design for citizens. This is a unique opportunity for anyone involved in the design, content, and UX of their organization’s/agency’s digital experience.

Personalization & Government: The Odd Couple or The Perfect Match?

Personalization has become the new standard, across industries, for improving customer experience, but how can personalization be leveraged in government? Join our CEO, Jeff Walpole, and Vice President of Business Development, Ben Coit, to explore use cases and how governments can continue thinking about the next generation of citizen experiences.

Read more about scaling personalization in our recent whitepaper.

Whether you’re joining us on the ground (in session) or from afar (via the airwaves), be sure to read our latest issue of Contributed Magazine, specifically covering open source and secure digital governments.

Jul 08 2019

For the past three years, developers, IT, and marketing professionals have gathered in NYC for Decoupled Days, a growing and impactful event with influential speakers and change agents from across the tech space, to share and learn about decoupled architecture solutions for some of the world’s leading brands and organizations.

While the topic of decoupled architecture solutions may seem hyper-specific for a two-day conference, there is aggressively growing interest and investment in decoupled architecture strategies across industries, all to enable the digital brand experiences organizations require to stay competitive.

Decoupling the front end of an organization’s platform from the back end allows for more flexible design capabilities and the ability to update the design at the speed that marketing teams need to move. When the back end of your website is decoupled, you can easily integrate and swap microservices in and out to provide the latest and most useful tools and functionality without a complete site redesign.

Here at Phase2, we work with many of our enterprise clients on decoupling their digital architectures to meet their brand experience goals. We’re thrilled to participate in the organization, sponsorship, and thought leadership of this event. If you plan to attend this year’s event (and we think you should!), be sure to check out some of Phase2’s front-end leaders as they share best practices and cutting-edge tooling.

We hope you’ll join us July 17th-18th in NYC to learn more about business, technology, and diversity when it comes to decoupled architectures for seamless omnichannel brand experiences that inspire audiences.

Jun 05 2019

The latest version of Drupal is version 8.7.2. You’re familiar of course. In fact, you’ve been on pins and needles ever since version 8.7.1(a). I’m sure you’ve been instasnaptweeting ever since it was issued.

OK, back to reality: You’re definitely more concerned with getting the data you need, driving great brand interactions, and managing costs (and keeping your InfoSec or IT teams happy) than with the latest CMS version.

However, what if I told you that your outdated Drupal 6 or 7 instance could actually hinder you from innovating, drain your budget, and potentially cause unnecessary security risks? And that once you get to Drupal 8, lower future migration costs, ongoing security updates, and best-of-breed functionality (like content management and media libraries) from core Drupal contributions will help you deliver the audience experiences that grow your business?

So, should you migrate to Drupal 8 immediately? Yes.

Here’s why:

  • More languages? Not a problem: multilingual capabilities - It doesn’t take an engineering background to know that creating a multilingual site is not an easy task. Taking a step back, there are so many questions to consider:

    • Should content be displayed in a single native language?

    • What percentage of your site traffic is in English vs. other languages?

    • How should you handle media files?

    • Is there a risk that translations will introduce unwanted changes?

    • Will the translations be handled properly? Who will handle translations?

The good news is that creating a multilingual site became a lot easier with Drupal 8, with benefits for both site admins and end users. Multilingual support is now part of Drupal 8 “core” (the basic feature set). In previous versions of Drupal, you needed to install extra modules (collections of files that provide functionality) just to support multilingual sites, which meant a lot more work, added costs, and additional maintenance. Drupal 8 brings multilingual support to core with four modules (Language, Interface Translation, Content Translation, and Configuration Translation), localized content (which makes usability and translation easier, so that content translation is possible for all types, including taxonomies, field types, et al.), and 100+ default languages included.

  • Ease of content editing - Building layouts has never been more intuitive than with Layout Builder! The tl;dr: Layout Builder keeps fielded content on the back end while providing a true drag-and-drop front-end editing experience. There are even more added benefits, including a flexible admin interface and the use of contextual links while keeping structured data. And you can now create layouts for templated content with Layout Builder; many of Drupal’s competitors don’t allow such a templated approach to be done in the browser. (P.S. Learn more on this fantastic page-editing experience from my colleague, Caroline Casals, here.)

  • Responsiveness for all - Responsive behavior is no longer a nice-to-have, but de rigueur. This was an add-on in Drupal 7, but Drupal 8 core includes built-in themes that are mobile responsive. Additional web services now allow for content to be accessible from Alexa, Siri, and other virtual assistants.

  • Speed matters! - Drupal 8 has features that will make your websites faster without the need for a lot of technical experience. “Cache tags” make caching more efficient and page loads faster (a short code sketch follows this list).

  • More robust security - Drupal 8 is the current stable version of Drupal; core functionality and major modules have been ported over and are now supported by the Drupal open-source community.

  • Integration friendly - Drupal 8 is set up so that site administrators can easily use APIs to connect to a number of digital tools. Think of your deep integrations, like web analytics platforms, customer relationship management (CRM), email campaigns, social networks, and marketing automation systems; you can rest easier knowing that they’ll perform and communicate in concert.
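
As promised, here’s a minimal sketch of how a Drupal 8 render array declares cache tags; the node ID and markup are illustrative only, not code from a specific project.

<?php

// A render array for output that depends on node 42. When node 42 is
// saved, Drupal automatically invalidates every cached page or fragment
// carrying the 'node:42' tag, so stale copies never linger.
$build = [
  '#markup' => $node->getTitle(),
  '#cache' => [
    'tags' => ['node:42'],
  ],
];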

So why else should you invest now? Drupal 7 is currently on life support. As of November 2021, Drupal 7 will reach its “end of life” (EOL), which means that the version will no longer be supported by core maintainers.

“No longer being supported” also means the following:

  • The Drupal Security Team will no longer provide security advisories or patches for Drupal 7 core or contributed modules.

  • The community at large will stop providing bug fixes and new features for Drupal 7.

  • Any continued support will have to come from paid third-party vendors.

Drupal 9 is scheduled to be released next year (2020), which gives companies about a year to upgrade to Drupal 8. Don’t panic, but develop a reasonable plan for when you’ll stop investing in your Drupal 7 platform, keep your site as up-to-date as you can (this will help security), and get on Drupal 8 as fast as you can. Still have questions? Give me a shout.

Jun 04 2019

Historically, migrating your content management system (CMS) or content platform from one major version of Drupal to the next was nothing short of a Herculean task.

Every new version of Drupal meant rebuilding existing functionality, converting (or migrating) your content, and accepting significant changes along the way. Because of this, it’s become commonplace to see stakeholders want to leapfrog Drupal versions (e.g., 5 to 7, or 6 to 8), to extend the life of both their old and new platforms for as long as possible for the least amount of transitional pain.

But this is no longer the case.

If you’ve been swimming in Lake Drupal for the last year, you’ve already heard the clarion call to prepare for Drupal 9, due to release in June of 2020. Most of the messaging has been tailored towards planning ahead for your migration, regardless of what version your CMS sits on today.

All too often, many of those plans call for waiting until Drupal 9 is ready before migrating away from your older platform… you know, to leapfrog ahead and save yourself the pain of migrating your CMS more often than you need to.

And that’s only half the story.

If you’re not already on Drupal 8, the time to migrate is now. Drupal 9’s release within a year marks the end of life for Drupal 7. You could be incurring unnecessary security risks, all manner of technical debt, and a deluge of growing time-to-cost barriers — all while missing out on the most useful Drupal 8 features for marketers and developers like Layout Builder, an extensible media management system, and editorial workflow. Most importantly, Drupal 8 is essentially Drupal 9, so you’ll be ready for that upgrade when the time comes without another major effort.

Read: When the time comes, your Drupal 8 site can become Drupal 9 without migrating again.

TL;DR: Migrate Out of 6 or 7 Before 9 Drops

If you already know you need to migrate from Drupal 6 or Drupal 7, but you’re not sure where to begin, read this primer to help you scope, identify, and take steps to start the process.

If you’re still not convinced why you should already be migrating, here’s why it’s critical to begin migration ASAP.

If you’re on 6

  • Drupal 6 support has already ended (as of February 2016). If you’re still on this version, you’ve got unmitigated security risks that, if left unchecked, could cost your organization unforeseen resources or damages.

  • Migration from Drupal 6 will not be part of Drupal 9 core. It will be deprecated in Drupal 8.8 and removed in Drupal 9 to become a contrib module.

  • Core migration maintainers will manage the contrib module and support it for a while, but it will eventually become community owned. That leaves the future direction of that module unclear.

If you’re on 7

  • Support ends in 2021 (and that’s not too far off!). If you haven’t planned or budgeted for this, it’s time to start now, especially if your budget planning follows a traditional calendar year.

  • You might have painful memories of past Drupal migrations. While previous versions were, in fact, incompatible across migrations, this is not the case from Drupal 7 to Drupal 8.

  • This is the “last great migration.” Migrating from Drupal 8 to future versions (Drupal 9 and beyond) won’t require a similar migration effort. Upgrading from Drupal 8 to Drupal 9 will be more or less a drop-in replacement, a version bump. Relatively speaking, there will be significantly less effort in upgrading from Drupal 8.9 to Drupal 9.0 than in previous core versions.

  • You don’t want to spend the next 12-16 months on an old system that you’re already wanting to upgrade. Just rip the band-aid off now. Drupal 9 is essentially a version of Drupal 8, so the wait is needless and will only make for a more complex and costly two-version jump.

  • When Drupal 7 support is officially dropped, security updates will only be managed by third-party vendors.

[Note for developers: here’s how to navigate some of the more complex aspects of 8 on D7.]

[Note for marketers: even two years ago, we hosted a webinar on the benefits of Drupal 8 for marketers.]

But Drupal 8 Isn’t Drupal 9, Yet

On the day of release, Drupal 9.0 will be the same as Drupal 8.9, but with these significant differences:

  • External dependencies (such as Symfony) will be upgraded to the latest major version.

  • Deprecated code in Drupal 8 will be removed, potentially breaking custom or contrib modules still using that code.

Otherwise, there will be no significant core API breaking changes (as in previous major version upgrades), maintaining all the same core features and functionality. This means that you can effectively upgrade from Drupal 8 to Drupal 9 without overhauling your CMS or migrating content, simply by updating a point release.
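
To make “deprecated code” concrete, here is one real example among many; the message text is illustrative. drupal_set_message() was deprecated in Drupal 8.5 and removed in Drupal 9, so modules still calling it break on upgrade.

<?php

// Deprecated since Drupal 8.5 and removed in Drupal 9:
drupal_set_message(t('Settings saved.'));

// The Drupal 9-ready replacement uses the messenger service instead:
\Drupal::messenger()->addMessage(t('Settings saved.'));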

Migrating to Drupal 8 now will prepare you for Drupal 9, but you will still need a Drupal 9 readiness plan. Knowing what is being deprecated and how that affects your CMS is essential to your roadmap. However, with a readiness plan in hand, your Drupal 8/9 platform’s lifespan will be significantly increased.

Bottom line: The thing that you’re waiting for is already here. Don’t cling to an old system that doesn’t meet your needs anymore and is potentially costing you more time and money while increasing your security risk. Read Issue #1 of Contributed Magazine, written with marketers and developers (code included) in mind, so they can get the most out of Drupal 8 and its soon-to-be baby brother, Drupal 9.


May 13 2019

Digital experience platforms (DXPs) had a big moment this past week.

With Wednesday’s simultaneous announcements, Acquia acquired the open source marketing automation platform Mautic and rival Sitecore acquired the digital consultancy Hedgehog.

Phase2 squarely operates in the business of implementing and integrating DXPs—so naturally I have thoughts on these announcements.

While there is a huge ecosystem of products and providers in this space, there are really three standard platforms that the majority of large-to-medium-sized enterprises are centered on: Adobe, Sitecore, and Drupal/Acquia. And while we may be accustomed to the non-organic growth of the Adobe platform via acquisition, we haven’t seen many major acquisitions among contender platforms.

So what is driving this consolidation?

What does that signal for the market?

And what does it mean for our clients and our company?

Acquia Acquires Mautic

Over the last few years, Acquia has pursued an aggressive strategy to compete with the DXP product offerings of Adobe and Sitecore, because digital experiences are not built on Content Management Systems (CMSs) alone. Sophisticated clients that are in touch with customer needs also have an overwhelming need for marketing technology that can drive results beyond building, hosting, and powering websites and content. That includes key components like commerce, CRM, and “martech”, including personalization, journey orchestration, marketing automation, and customer data platforms (CDPs).

Phase2 and Acquia share a perspective that the best way to serve those clients (and their wide range of design, technology, and brand needs) is through digital experiences based on open technology, viz. open source and an open API architecture that allow for greater integration, portability, and customization.

In addition to a cloud-based hosting platform and support for Drupal, Acquia offers a variety of products focused on completing the DXP including a personalization engine, a customer journey orchestration tool, and a digital asset manager. But this leaves several components of a complete DXP unaddressed; and traditionally, Acquia customers filled those gaps with alternative, best-of-breed solutions. Most notably absent was marketing automation. Enter Mautic.

Mautic was a necessary acquisition target for Acquia because:

  • Building a stronger connection between Mautic, Drupal, and the Acquia platform should provide superior integration of customer data over using Marketo, Eloqua, Pardot, or Hubspot.

  • Today, Mautic is the only viable open source product that could fill this gap and should appeal to customers who prefer open alternatives to proprietary software.

  • It’s a (not-so-subtle) counter to Adobe’s acquisition of Marketo which could be an entry point into the Adobe Marketing Cloud for shared customers.

Since it’s open source, Mautic is particularly appealing to the many Acquia customers already on Drupal. But there’s also a natural fit from the perspective of Dries Buytaert, Acquia CTO and Chairman (and the creator and leader of the Drupal project), who envisions the Acquia platform as an “open” alternative to both Sitecore and Adobe, an important option in the market that secures the credible claim that Acquia is building the “First-Ever Open Marketing Cloud”.

This acquisition also continues the trend I highlighted in an October post, when IBM acquired Red Hat, in which I argued that open source has come of age in the enterprise and that consolidation and adoption of open source software is the “new normal”.

Sitecore Acquires Hedgehog

Meanwhile, the move by Sitecore to acquire one of its leading services partners is unusual. It could indicate that, along with expanding revenue, control over the implementation of its software is an objective.

Ultimately, I suspect this move will enable Sitecore to drive more successful implementations, but potentially at the cost of the loyalty and health of its partner ecosystem. It could be a step into the slippery-slope world of “channel conflict”.

Look no further than giants like Oracle and IBM, which have long struggled with this balance. They believe they need a services component to ensure successful adoption and to guide tricky, leading-edge implementations, but they inevitably face backlash from both partners and customers who want a healthy and flourishing ecosystem of implementation partners to choose from.

In my opinion, Acquia’s pursuit of product over services is a healthier direction for all players in the ecosystem. While Acquia has more components still to develop or acquire in its pursuit of a complete DXP offering, the open nature of the platform and the driving direction from its open source roots allow customers to assemble the kind of componentized, heterogeneous DXP that has already become the standard.

Upon taking the CEO position at Acquia in December 2017, Mike Sullivan made a very important statement to the Acquia partner community, the Drupal community, and Acquia’s investors when he redirected the professional services (PS) team at Acquia to staff up the product engineering team. The company has pledged to limit the growth of its PS team to a defined percentage of revenue to ensure ample opportunity for partners and to focus its PS efforts on enablement, unique challenges, and high-priority adoption projects.

In fairness, Acquia did have a larger PS group than Sitecore. But, using some very back-of-the-napkin calculations on both companies, I would conclude that this move likely puts Sitecore’s total PS size above Acquia’s. It will matter, of course, how and where they deploy this team—in terms of implementations vs. training/enablement.

What’s Next?

It was just over a month ago that, in similar fashion, Salesforce and Adobe announced (also in the same week) that they were acquiring competing Customer Data Platform (CDP) products. So this is all still far from over; we are going to see more acquisitions and consolidation.

We may also see the DXP solution components themselves (CMS, CRM, DAM, commerce, personalization, marketing automation, CDP, MDM, and campaign management) merging into more consolidated categories of software. This is increasingly likely in light of the fact that key industry players like Adobe will want to blend these distinctions to create platform stickiness. It is easier to sell a “Marketing Cloud” than independent types of software.

Personally, I believe that while consolidation of the DXP market will help with customer confusion, there will be (and should be) a place for heterogeneous, de-coupled DXPs that use an API-first architecture to combine different components. These will give customers greater choice, “best-of-breed” functionality, lower (or more controllable) costs, and more opportunity to cater to the uniqueness of each digital experience.

And finally, despite the power of software as the building blocks for digital experiences, the real key to success is how the true experiences are brought to life—beyond underlying platforms. In other words, success of complex digital implementations remains dependent on the services necessary to plan, execute, integrate, design, and polish the user experience of the product. And services companies—like ours—will continue to play a key role in guiding the success of implementations and experiences themselves.   

Apr 19 2019

We snagged this photo on our second day in the Pacific Northwest.

After U-Haul-ing from Phase2’s office in Portland, the marketing team paused at a statue of Seattle’s native son, Ivar Haglund (somewhat aghast at the size of the statue’s gulls, but more aghast at their true-to-life scale), before getting settled in for DrupalCon 2019.

We didn’t know it at that moment, but these large birds (read: dinosaurs) would be harbingers of the conference’s largest year, ever.

Between keynotes, sessions & summits, booth-happenings, team bonding and client bonding, here are our laconic takeaways from an Ivar-feeding-the-gulls-sized-DrupalCon:

THEMED

  • Phase2’s 2019 DrupalCon theme was Engineering The Invisible.

  • The theme was a nod to the often invisible work accomplished through code, but also to the complex machinations it takes to deliver seamless brand experiences for audiences, whether they’re citizens, customers, fans, or patients.

  • And, through new creative work, we demonstrated different takes on “engineering” as a modern term for creating compelling stories, future-forward technologies, context, interactions, and transactions. From digital clipboards in healthcare to advanced machine learning for help centers, we’re meeting people at the moments that matter most to them, in valuable and dynamic ways.

  • This was brought to life through the (now eponymous) radiance orb by Light at Play and a custom Twitter integration that allowed attendees to directly interact via conference-related hashtags, retweets, and a touchpad (tying the physical booth and digital experiences together):

    • Built using open source technology

    • 4’ geodesic sphere comprised of 320 edge-lit light transmissive acrylic panels

    • Containing ~3,000 individually-addressable LEDs

    • Governed by three internal computers

    • Featured at UNESCO in Paris in 2015 as part of the International Year of Light, and at the National Academy of Sciences

MC’D

  • Our intrepid Vice President of Product and Strategy, Kellye Rogers, led the first-ever #DrupalHealth (Healthcare) summit.

  • The keynote was delivered by Emily Kagan Trenchard of Northwell Health, a profoundly impactful speech on the culture of tech for patients, caregivers, doctors, donors, and systems: “Science is the original open source project; and healthcare is the ultimate platform.”

  • Our Accessibility (#A11y) Lead, Catharine McNally, received a standing ovation for sharing her story and an Empathy Training session that had attendees in slings and modified glasses, illustrating how quickly a change in capabilities can affect usage, and subsequently your interactions with the modern, digital world.

We’re definitely looking forward to DrupalCon 2020 in Minneapolis; keep in touch with us on Twitter or LinkedIn in the meantime.

Apr 17 2019

We’ve written a lot about content migration on our blog here—it’s something we have more than a passing interest in, because we do it a lot! The posts below cover the project management, estimation, and basics of content migration from Drupal to Drupal, and other sources too.

If you’ve been following along with this series, you will have a lot of good information at your fingertips. (If you haven’t, I highly recommend you do so now. We’re building on their foundation here.)

If you’ve tried to implement the code samples, you might even have a functional migration!

But what if you don’t? What if you’re getting errors, or unexpected results, or just… nothing at all? Well, that’s what this post is all about. What do you do, when what you did isn’t doing what you thought it was going to do?

Here are some tips, tricks, and starters for figuring out what went wrong.

Migrate Message / Map Tables

Drupal 8’s migrate system is very responsible. When you create and first run a migration, it adds two new tables to your database: migrate_map_[migration_id] and migrate_message_[migration_id].

The migrate_map_ tables are comparison tables. They store three items of interest:

  • The ID(s) identifying each item of content in the source system.

  • The ID(s) that content was assigned in the destination system.

  • The status of that row’s migration.

The purpose of storing this information is two-fold. First, it allows the MigrationLookup plugin to do its thing, associating old references to migrated content at the new ID. Second, and more relevant for this post, the Status field is a handy reference for you to quickly determine if a given item of content successfully made the transition to the new system. Imported is 0, Ignored is 2, Failed is 3.

What’s the difference between ignored and failed? Ignored means that something in your migration told Drupal to skip that row. Usually, that’s the SkipOnEmpty or SkipRowIfNotSet plugins; some other process plugins will call SkipOnEmpty as part of their own processing. This is usually intentional and not a reason to worry.

Failed means that Drupal tried, and it didn’t work. This is often because a necessary field isn’t set, or because there was a PHP error or other problems along the way.

Failures are where the migrate_message_ tables come in. In most cases, when Drupal records a failed migration row, it will also provide a message giving you a clue as to why that happened. The two tables are cross-referenced by source_ids_hash. In the event of a failed migration, migrate_message is the first stop in your diagnosis.

For example, a common one when dealing with files is:

File 'public://example_file.pdf' does not exist

Obviously, this is a simple fix—either replace the file, or fix the file path in the source data. Look at what Drupal’s telling you here, and see if it’s something you can easily fix. If nothing jumps out at you, keep reading.
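
Before moving on: when there are many rows to sift through, you can query the two tables together. Here is a minimal sketch using Drupal’s database API; the migration ID example_articles and the single-column source ID are assumptions, so adjust table and column names to match your migration.

<?php

// Pull every failed row for the (hypothetical) 'example_articles'
// migration, joining each map entry to its diagnostic message.
$query = \Drupal::database()->select('migrate_map_example_articles', 'map');
$query->leftJoin('migrate_message_example_articles', 'msg', 'map.source_ids_hash = msg.source_ids_hash');
$query->fields('map', ['sourceid1', 'source_row_status'])
  ->fields('msg', ['message'])
  ->condition('map.source_row_status', 3); // 3 = STATUS_FAILED.

foreach ($query->execute() as $record) {
  print "{$record->sourceid1}: {$record->message}\n";
}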

Power Cycle Script

Why isn’t the change you made in your migration.yml file working? Well, Drupal probably doesn’t know about it. By default, Drupal imports a module’s configuration files only when the module is initially enabled. The ‘active’ configuration lives in the database, so changes you make to your migration module’s YML files are not registered by Drupal.

In order to overcome this, you have to run a configuration import in drush. It looks like this:

drush cim --partial --source=modules/custom/example_migration/config/install/

Of course, typing that out every time you make a small change in your config is a hassle, so script it! A good migration script will stop the migration, reset it, reimport the configuration, roll the data import back, and run it again. Having a script like that will save you the hassle of typing the same five drush commands over and over, and possibly forgetting to import config or reset your migration. You can find ours in our D8 Examples repository. Run it from the command line and relax, secure in the knowledge that your migrations are using the most current configuration.[1]

Adding Line Numbers in YML Files

This tip is a little obscure, but useful. When you are migrating from an XML source, there are a whole bunch of things you have to specify in your source config. Notably, you have to specify the fields you will be using from the XML. The details of how this works are spelled out in a previous blog post, Migrating to Drupal From Alternate Sources, linked at the start of this article.

example_xml_migrate/config/install/migrate_plus.migration.example_xml_articles.yml


id: example_xml_articles
label: 'Import articles'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://www.phase2technology.com/ideas/rss.xml'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    -
      name: guid
      label: GUID
      selector: guid
    -
      name: title
      label: Title
      selector: title
    -
      name: pub_date
      label: 'Publication date'
      selector: pubDate

The migrate_plus module gives us the ability to create migration groups, which allow us to consolidate configuration. That’s covered in detail in our blog post Drupal 8 Migrations: Taxonomy and Nodes, also linked above.

These two ideas combine together pretty nicely: You can in fact call out common fields in your XML by putting them in a migration group, and then use those fields in multiple migrations.

example_xml_migrate/config/install/migrate_plus.migration_group.example_xml_group.yml


id: example_xml_group
label: General Content Imports
description: Common configuration for node migrations from XML.
source_type: XML File
shared_configuration:
  source:
    plugin: url
    data_fetcher_plugin: http
    urls: 'https://www.phase2technology.com/ideas/rss.xml'
    data_parser_plugin: simple_xml
    item_selector: /rss/channel/item
    fields:
      1:
        name: guid
        label: GUID
        selector: guid
      2:
        name: title
        label: Title
        selector: title
      3:
        name: pub_date
        label: 'Publication date'
        selector: pubDate


However—there’s a catch. Note that, unlike any other YML file shown in this series so far, the fields here have numerical array keys. This is because, if you specify a fields section in your individual migrations as well, and both arrays are keyed with the normal -, the migration fields section will completely override the group fields section. Keying them by number in both YML files allows them to be additive. Just make sure that your group and migration keys don’t collide.

Weirdly, the process section of migration config seems to be additive already; you can specify process plugins in both the group and the migration. The migration will only override duplicates on the field level, not the whole shebang.

XDebug and Figuring Out Failures/Error Messages

This is the big one, the lynchpin of debugging a migration. I’m not going to tell you how to set up XDebug with your IDE and dev environment. Let’s face it, there are a berjillion different dev environments, and every one of them is set up differently. Fortunately, there seem to be two berjillion tutorials on making XDebug work with whatever your dev setup is. So, go figure out how to make that part of things work; then you can start looking for issues in the code.[2]

OK, got that part? Good.

A good IDE will allow you to set “breakpoints”. Wikipedia defines them thus:

...a breakpoint is an intentional stopping or pausing place in a program, put in place for debugging purposes.

When XDebug is enabled, and the IDE is ‘listening’ to your server, the execution of the code will halt at the breakpoint(s), and you should get some tools to examine the state of variables in that moment. In this case, we’ll be setting breakpoints in a few key files in the migration process.

Migrate Executable

First up, the main executable file in migrate module: core/modules/migrate/src/MigrateExecutable.php. This class file has the massively important import() method.

The import() method is the spider at the center of the web of a migration. It calls tons of other methods as it checks requirements, gets the source data, gets the destination configuration, and loops through the data to create and save new content in the target environment.[3]

  • Line 184 calls getSource(). This method attempts to retrieve the data you’re planning on migrating. Setting a breakpoint here will allow you to dig into the retrieval process.

    • The getSource() method invokes the source plugin you’ve specified in your migration and migration_group YML file. See below for more detail on Source Plugins.

    • If Drupal gets through to line 197 without throwing an exception, and $source has a value, then Drupal is (probably) successfully retrieving the data. You can use your IDE to examine the validity of $source, just to be sure.

  • Line 198 is the start of where you’re most likely to encounter errors. The while loop defined here cycles through all the data from the source, and runs processRow() on each $row of data (Line 203). This method calls all the process plugins defined in your migration and migration groups.

    • When you use your IDE to step through processRow(), you will quickly find yourself in the individual process plugins (Line 368). These are the code files that do the actual data manipulation. There’s a bunch defined in core, more in migrate_plus, and you can also create your own (a minimal sketch of a custom process plugin follows this list). If you determine that an issue is happening in one of those specifically, you should probably just put a breakpoint there; it’ll save you a lot of clicking.

  • Finally, line 226 calls $destination->import(), which is where the data is actually saved to the destination environment. Usually, if you’ve gotten to this point, saving is smooth sailing, but if the problems aren’t occurring in the process section, this is a likely next bet.
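
As promised above, here is a minimal sketch of a custom process plugin; the module name example_migration and plugin ID example_trim are hypothetical. The transform() method makes a convenient breakpoint target because every value in the pipeline passes through it.

<?php

namespace Drupal\example_migration\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Trims whitespace from incoming string values.
 *
 * @MigrateProcessPlugin(
 *   id = "example_trim"
 * )
 */
class ExampleTrim extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Set a breakpoint here to inspect $value and the full $row mid-run.
    return is_string($value) ? trim($value) : $value;
  }

}

Referenced from a migration’s process section as plugin: example_trim, it behaves like any core plugin.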

MigrateExecutable.php also defines a bunch of exception error messages. This is the source of many of the messages seen in migrate_message_ tables, as well as command line errors when running migrations with Drush. The messages can also be a good way to figure out where to set breakpoints - track down the error message, then backtrack to the try statement that’s associated with that exception’s catch.

When you’re searching through the code, bear in mind that Drupal does a lot of string substitution in error messages, like so: 'Migration @id did not meet the requirements. @message @requirements'. Make sure you edit your search terms to exclude things that are specific to your situation, like the migration @id.

Source Plugins

Drupal core defines a lot of source plugins—one for pretty much every entity type present in Drupal 6 and 7, in fact, plus revisions and translations. And, confusingly, these files are not stored in the migrate or migrate_drupal modules. Source plugins are stored in the folder of the module that defines the entity type. For example, the source plugins for node entities are at drupal/core/modules/node/src/Plugin/migrate/source.

Fortunately, Drupal 8 is object oriented, which means that each of these plugins will have the same base set of methods in them. The most important is prepareRow().

  • The prepareRow() method is responsible for loading and preparing each row of data for the migration. For example, the D7 nodes plugin gets the baseline node values and then adds in any Field API data associated with that node. (A sketch of hooking into this method follows this list.)

  • Every source plugin will have this method. In the end, they are all responsible for returning an array of objects in a uniform format that the process plugins will understand. How they do this will, of course, vary based on the type of source data, but their output at the end should be effectively identical.

  • Generally speaking, if you are having issues with your source data being weird, the problem probably isn’t in the source plugin. It’s more likely that you are somehow specifying things incorrectly in your migration or migration_group YML file.
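
If you do need to watch rows as they are prepared, one low-friction option is a thin wrapper around a core source plugin. This is a minimal sketch; the module name example_migration and plugin ID example_d7_article are hypothetical, and the parent class is core’s D7 node source.

<?php

namespace Drupal\example_migration\Plugin\migrate\source;

use Drupal\migrate\Row;
use Drupal\node\Plugin\migrate\source\d7\Node;

/**
 * Wraps the core D7 node source so prepareRow() is easy to debug.
 *
 * @MigrateSource(
 *   id = "example_d7_article",
 *   source_module = "node"
 * )
 */
class ExampleD7Article extends Node {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    // Set a breakpoint here to inspect each source row as it is built;
    // returning FALSE from this method skips the row entirely.
    return parent::prepareRow($row);
  }

}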

The migrate_plus module also provides a URL source plugin, which is used for XML, JSON, and RSS imports. It’s substantially more abstracted than the core DB-based entity plugins. In addition to the URL source, it makes use of data fetchers, which grab the data from either a file or an HTTP request, and data parsers, which are responsible for reading and understanding the format of the data. They do not directly invoke the prepareRow() method; instead, your debugging will likely need to poke into the data parsers.

Destination Plugins

The code that does the work of formatting and saving entities is substantially more abstracted than the source plugins, because all entity types in Drupal 8 are structurally the same. This particular functionality is pretty battle-hardened, so it’s unlikely that your issues will be here. That said, if you do have need of debugging it, start with drupal/core/modules/migrate/src/Plugin/migrate/destination/Entity.php.

Still Not Working? Have Additional Ideas?

Migration’s a pretty involved process, with a lot of moving parts. If you’ve tried all of this, and nothing is making it any better, well, it might be time to seek help. The #migration channel on Drupal Slack is a great place to start. I can be found there as @srjosh.

If you’ve discovered a legitimate bug in the core migrate code, you’ll want to peruse the issue queue. The migrate_plus module has its own issue queue, as well.

If you’re interested in the latest on the efforts to stabilize and improve migrate in Drupal, the Migration Initiative is the group responsible. They can be found at @MigrateDrupal.

If you have a tip, trick, or snippet that just plain makes your migration life easier, please drop it in the comments below. Happy migrating!

  [1] It is also possible to store the migration yml files in module/migrations, instead of module/config/install. Additionally, the naming convention is simplified—it's just migration_id.yml, instead of migrate_plus.migration.migration_id.yml. This allows migration configurations to be reimported with only a cache clear, instead of running a config import. However, migrate groups from the migrate_plus module have not caught up with this, meaning that you still have to put them in module/config/install, and import them with a config import. Which you choose is your call, but there’s a lot of value in having a consistent workflow for both migrations and migration groups. ↩︎
  [2] Here at Phase2, we’ve standardized on using Docksal for our dev setups, and a lot of us use PHPStorm. The Docksal docs for integrating the two are great. ↩︎
  [3] Please note that all line numbers reference Drupal version 8.6.x; line numbers from other versions can and will vary. ↩︎
Apr 07 2019

Today, personalizing your customer journey is an essential strategy and part of delivering a competitive brand experience. And while some organizations have implemented a range of personalization tools, many have not successfully scaled their personalization strategy across their customers’ journey at every touchpoint.

When we throw commerce into the personalization equation, we get another layer of nuance. While retail has always been at the forefront of customer experience innovations, digital commerce tools have rarely provided brands with the necessary flexibility and scalability for a consistent, branded, purchasing experience across devices, channels, and touchpoints.

Leading up to Drupalcon Seattle, our team got together to build a demo that showcases how open technology and API-driven tools can plug into your existing digital experience platform—so brands can truly personalize their purchase experiences across the customer journey.

To build this demo, we used Drupal as the content management base, integrated Acquia’s Journey and Lift tools for personalization and journey orchestration, and layered in Elastic Path Commerce, a flexible, open tool designed to deliver a unified selling experience.

Through an automated and personalized 1:1 customer journey, the demo uses multiple channels and touchpoints (Twitter, website, email), in addition to a discount code and, finally, a purchase of some merchandise. (Demo attendees get the merchandise free at the end.)

As the user interacts with more touchpoints, we are able to build a robust customer profile based on behavior and preference, and deliver more targeted and meaningful brand interactions across the whole customer journey.

If you are attending Drupalcon, be sure to stop by Booth 201 to see our demo. If you won’t be attending but are interested in seeing the demo and learning more about personalized commerce experiences, contact us!

Apr 04 2019

Stop me if you’ve heard this one before.

You’re working on a big, time-sensitive update to a page. Maybe it’s a landing page, or a home page. Whatever it is, stakeholders are involved, and Big Names are giving input. You’ve got 15 versions of this in draft. Your workflows are performing perfectly and launch is ten days away.

Then, disaster strikes.

Someone has noticed a typo—on the published page. You can’t unsee it. Big Names get notified and suddenly the new page isn’t what everyone is talking about. There’s that glaringly obvious typo and it

must

be

fixed.

If you’ve been here before, you know Drupal just isn’t your friend right now. You can’t edit the published version of the page without playing some truly intense revisions hopscotch.

Editors of yesteryear would have played that hopscotch at 10PM hoping no site visitors noticed. That was not a happy thing. That was darkness and despair. Today though, you’ve got options. Today you can confidently stride into typo-aggedon and with bold swagger, place your hand on Big-Name’s shoulder, and loudly proclaim:

“I’ve got this. Hold My Draft.”

Introducing Hold My Draft

We’re happy to announce Hold My Draft, our new workflow module that does one very important thing for typo-aggedons across the Drupal-verse: it gives you the ability to edit a published revision when you have forward drafts. Granted, it’s a rather specific problem, but it’s a specific problem you don’t have to worry about anymore.

Here’s how the draft-hold process works:

  • To kick off a draft-hold on a specific page, head to the Revisions tab.

  • Once there, if you have a forward revision ahead of the published one, you’ll be able to “clone and edit”.

  • Hold My Draft will track (hold) the latest draft of the page the instant you do this.

  • You can then make any changes or as many revisions as you’d like to the published page. When you’re done, you can complete or cancel the draft-hold.

Completing a draft-hold will triumphantly place your draft back on top of the revisions pile. Your held draft has been handed back to you after the crisis has been averted. Or you can cancel the draft-hold, leaving the once-held draft behind in the dustbin of the revisions archive.

This module is designed to step lightly on the core revisioning system—there’s never a destructive process so you don’t need to worry about data loss. Hold My Draft simply notes the revision ID of your draft for retrieval later and uses core revision reversion.

No odd revision states.

No custom revision hijinks.

No restrictions on functionality while a draft is held.
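
For the curious, here is a minimal sketch of what core revision reversion looks like; this is not the module’s actual code, and $held_vid stands in for the revision ID that Hold My Draft notes when a draft-hold begins.

<?php

// Load the held revision and promote it back to being the default
// (current) revision of the node, much as core's revert form does.
$storage = \Drupal::entityTypeManager()->getStorage('node');
$revision = $storage->loadRevision($held_vid);
$revision->setNewRevision(TRUE);
$revision->isDefaultRevision(TRUE);
$revision->save();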

[Note: this module requires the core Content Moderation module, but it will work on content that is not moderated. Any revisioned node with forward revisions can utilize Hold My Draft.]

This is an alpha release and at present only works for nodes. We have some more features we’d like to put in, like at-a-glance reporting on in-progress draft-holds, but we’re also excited to share Hold My Draft as is. Got a feature request? Hit us up in the issue queue!

We’re excited to share this module in the hope that we all have one less thing to stress about in this big crazy world of digital experiences. May you never need to play revision hopscotch again; may you always be able to fix that published page; and may Big Names everywhere sleep a little easier. Thanks all, and good luck out there.

PS, like our admin theme? It’s the new Claro admin theme, the refresh of Seven that’s in progress.

Mar 25 2019

Our annual pilgrimage to Drupalcon is just two weeks away and we are very excited for another unforgettable conference filled with community momentum, thought leadership, and partnership. With several sessions, summits, tracks, and booths to visit this year, I thought I would highlight some key places to find some of Phase2's finest at the conference!

Healthcare Summit

We are thrilled to be participating in the first-ever Healthcare summit at Drupalcon! As the healthcare industry undergoes rapid digital transformation, this is an exciting time to see what leading organizations are building when it comes to seamless patient experiences on- and offline. Emily Kagan Trenchard, Northwell Health’s VP of Digital Innovation, will be presenting the keynote, and our own Kellye Rogers, Phase2’s VP of Product and Strategy, will be MC-ing the event.

Configuration Management Initiative 2.0 updates

Our very own Drupal treasure Mike Potter will be joining Fabian Bircher to break down the latest on the Configuration Management Initiative and what it means for your site. A session not to be missed!

Introducing The Migrate QA Module

In this session, Senior Developer David Lanier will be introducing the module he recently contributed called Migrate QA. He will review how the module helps content admins migrate to Drupal 8 and how developers can easily integrate this tool for a smooth migration.

Custom Compound Fields in Drupal 8

Tobby Hagler, infamous for past Drupalcon sessions like Dungeons & Dragons, & Drupal, will be presenting this year on Compound Fields in Drupal 8 and how they can be used with Media and Layout Builder. Check out this session if you are interested in learning how to deal with complex data models in Drupal!

Future-Proofing Your Agency

Long-time Drupal board member and Phase2 CEO Jeff Walpole will be leading a session on agency leadership. From his experience as a founding member (and our CEO) of Phase2 since 2001, Jeff will discuss culture, strategic positioning, sales, and financial health and security.

Don’t Forget To Visit Our Exhibitor Booth #201!

If you are attending Drupalcon this year, be sure to stop by our booth! Our team is excited to share and learn from the community. We will be hosting demos of some of the hottest Drupal and personalization technologies out there, handing out our legendary Phase2 swag, and unveiling our interactive open source art installation. If you can’t attend this year, be sure to follow along on Twitter: @Phase2.

Mar 18 2019

The Rundown

At Phase2 we’re always looking to pinpoint the real problem and solve it. Let’s say we have a new project to implement a design system for The First Order. We’ve done work for their parent organization in the past and already have a design system in place for The Empire. The site architecture calls for creating a multi-site and multi-design implementation to make use of The Empire’s assets for The First Order.

Particle is Phase2’s approach to front-end development.

It’s a huge starter kit for kick-starting projects with solid JavaScript best practices and ships with apps for Pattern Lab as well as Drupal and Grav themes. Particle allows us to have a jumping off point to get to solving client problems quickly and is designed to work out-of-the-box with a combination of PHP+JavaScript with Twig Templating.

Twig is a powerful PHP templating language, and while we could load all components from relative directories, that doesn’t make sense in terms of code maintainability and extensibility. Writing our new First Order design system to reference the Empire with a series of relative paths would get unwieldy very quickly. Fortunately, Twig allows the use of Twig Namespaces to address this issue. A Twig Namespace is simply a shortcut to a template directory.
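For example, instead of reaching across the filesystem, a component can be included through its namespace. A quick sketch (the @atoms namespace and the button component paths here are illustrative):

{# Without a namespace: a fragile relative path #}
{% include '../../../source/_patterns/01-atoms/button/button.twig' %}

{# With a Twig Namespace: location-independent #}
{% include '@atoms/button/button.twig' %}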

In Drupal, to achieve this functionality, we use the Component Library module, while in Grav we use the Twig Namespaces plugin. This allows us to register our Twig Namespaces within the context of the individual app implementation while using Particle’s JavaScript to determine what these paths are.

At Phase2 we really love Brad Frost’s Atomic Design principles and Particle uses these to organize our components out-of-the-box. Pattern Lab is the default app we use to demonstrate these atomic components. Particle’s configuration allows us to have a classic Peanut Butter + Jelly relationship with our components automatically being registered and consumed by our apps using atoms, molecules and organisms as our Twig Namespaced paths. Great! Now we can get to work solving The First Order’s problems.

The Problem

Not so fast though! Twig Namespaces have to be unique. So what happens when you need to provide a multi-design solution? Consider our use case: a multi-site platform that requires similar, but slightly different, components in each design.

The First Order really wants to use elements established in The Empire design but they have their own requirements (as well as a few very vocal stakeholders).

In Drupal, we chose a multi-theme build with “The Empire” parent theme and “The First Order” subtheme. Each theme now registers its own atoms, molecules and organisms in empire.theme and first_order.theme. However, we’ve hit a snag!

Since Twig Namespaces are unique, the last-in-line namespace will overwrite the previous namespace in the Component Library module. Because the First Order theme includes their own atomic components (atoms, molecules and organisms), these Twig Namespaces override the atomic components declared in the parent theme. Say goodbye to The Empire’s precious and much-used organisms/card--vader.twig variant.

While there are a few ways to address this problem, the root cause is the Twig Namespaces themselves. In a multi-design scenario adhering to Atomic Design principles, it might be better to prefix the Twig Namespaces with our app name.

Introducing Namespace Prefixing in Component Library

The alpha2 build of the Component Library module now includes the option to prefix a namespace. Addressing this in the Component Library module fixes the issue and provides a solution for The First Order and the community. To try the alpha version out, you can download the Component Library module into your module folder either by cloning the repository or installing via Composer:

Git

git clone --branch 8.x-1.1-alpha2 https://git.drupal.org/project/components.git

Composer

composer require 'drupal/components:1.1.0-alpha2'

This version of the module provides a default config for Namespace Prefixing (off by default). When enabled, the Twig Namespace is prefixed with the theme or module name. In our theme example above, Drupal now sees our Twig Namespaces as empire_organisms/card--vader.twig. Great! Now The First Order and the Empire can easily share design assets but have their own implementations!

To enable this you can either do so via config (update your site's components.settings.yml) or visit yoursite.com/admin/config/components/settings to enable it via the admin interface. We have intentionally left this off by default, as enabling it can easily break your site if you haven't also updated your theme.
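As a sketch, the exported config might look something like this (the key name here is illustrative; check the module's config schema for the exact name in your installed version):

# components.settings.yml (key name illustrative)
namespace_prefix: true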

Enabling this configuration should be done along with updating your theme’s invoked Twig Namespaces in templates. For example, if your Drupal theme’s page.html.twig contains {% include '@organisms/card.twig' %}, you will need to update it to {% include '@theme-name_organisms/card.twig' %} or else Drupal will no longer resolve the updated Twig Namespace path correctly.

The development roadmap for Particle has multi-design support next in line and we’re very excited to bring these solutions to a galaxy slightly closer to home.

Nov 01 2018
Nov 01

Content migration is a topic with a lot of facets. We’ve already covered some important migration information on our blog:

So far, readers of this series will have gotten lots of good process information, and learned how to move a Drupal 6 or 7 site into Drupal 8. This post, though, will cover what you do when your content is in some other data framework. If you haven’t read through the previous installments, I highly recommend you do so. We’ll be building on some of those concepts here.

Content Type Translation

One of the first steps of a Drupal to Drupal migration is setting up the content types in the destination site. But what do you do if you are moving to Drupal from another system? Well, you will need to do a little extra analysis in your discovery phase, but it’s very doable.

Most content management systems have at least some structure that is similar to Drupal’s node types, as well as a tag/classification/category system that is analogous to Drupal’s taxonomy. And it’s almost certain to have some sort of user account. So, the first part of your job is to figure out how all that works.

Is there only one ‘content type’, which is differentiated by some sort of tag (“Blog Post”, “Product Page”, etc.)? Well, then, each of those might be a different content type in Drupal. Are Editors and Writers stored in two different database tables? Well, you probably just discovered two different user roles, and will be putting both user types into Drupal users, but with different roles. Does your source site allow comments? That maps pretty closely to Drupal comments, but make sure that you actually want to migrate them before putting in the work! Drupal 8 Content Migration: A Guide For Marketers, one of the early posts in this series, can help you make that decision.

Most CMS systems will also have a set of meta-data that is pretty similar to Drupal’s: created, changed, author, status and so on. You should give some thought to how you will map those fields across as well. Note that author is often a reference to users, so you’ll need to consider migration order as well.

If your source data is not in a content management system (or you don’t have access to it), you may have to dig into the database directly. If you have received some or all of your content in XML, CSV, or other text formats, you may just have to open the files and read them to see what you are working with.

In short, your job here will be to distill the non-Drupal conventions of your source site into a set of Drupal-compatible entity types, and then build them.

Migration from CSV

CSV is an acronym for “Comma-Separated Values”, and is a file format often used for transferring data in bulk. If you get some of your data from a client in a spreadsheet, it’s wise to export it to CSV. This format strips out all the MS Office or Google Sheets gobbledygook, and just gives you a straight block of data.

Currently, migrations of CSV files into Drupal use the Migrate Source CSV module. However, this module is being moved into core, and the contrib version will eventually be deprecated. Check the Bring migrate_source_csv to core issue to see its current status, and adjust this information accordingly.

The Migrate Source CSV module has a great example and some good documentation, so I’ll just touch on the highlights here.

First, know that CSV isn’t super-well structured, so each entity type will need to be a separate file. If you have a spreadsheet with multiple tabs, you will need to export each separately, as well.

Second, connecting to it is somewhat different than connecting to a Drupal database. Let’s take a look at the data and source configuration from the default example linked above.

migrate_source_csv/tests/modules/migrate_source_csv_test/artifacts/people.csv

id,first_name,last_name,email,country,ip_address,date_of_birth
1,Justin,Dean,jdean0@example.com,Indonesia,60.242.130.40,01/05/1955
2,Joan,Jordan,jjordan1@example.com,Thailand,137.230.209.171,10/14/1958
3,William,Ray,wray2@example.com,Germany,4.75.251.71,08/13/1962


migrate_source_csv/tests/modules/migrate_source_csv_test/config/install/migrate_plus.migration.migrate_csv.yml (Abbreviated)

...
source:
  plugin: csv
  path: /artifacts/people.csv
  keys:
    - id
  header_row_count: 1
  column_names:
    -
      id: Identifier
    -
      first_name: 'First Name'
    -
      last_name: 'Last Name'
    -
      email: 'Email Address'
    -
      country: Country
    -
      ip_address: 'IP Address'
    -
      date_of_birth: 'Date of Birth'
...


Note first that this migration is using plugin: csv, instead of the d7_node or d7_taxonomy_term that we’ve seen previously. This plugin is in the Migrate Source CSV module, and handles reading the data from the CSV file.

  path: /artifacts/people.csv

The path config, as you can probably imagine, is the path to the file you’re migrating.  In this case, the file is contained within the module itself.




keys:
  - id


The keys config is an array of columns that are the unique id of the data.




header_row_count: 1
column_names:
  -
    id: Identifier
  -
    first_name: 'First Name'
  -
    last_name: 'Last Name'
...


These two configurations interact in an interesting way. If your data has a row of headers at the top, you will need to let Drupal know about it by setting a header_row_count. When you do that, Drupal will parse the header row into field ids, then move the file to the next line for actual data parsing.

However, if you set the column_names configuration, Drupal will override the field ids created when it parsed the header row. By passing only select field ids, you can skip fields entirely without having to edit the actual data. It also allows you to specify a human-readable field name for the column of data, which can be handy for your reference, or if you’re using Drupal Migrate’s admin interface.

You really should set at least one of these for each CSV migration.

The process configuration will treat these field ids exactly the same as a Drupal fieldname.

Process and Destination configuration for CSV files are pretty much the same as with a Drupal-to-Drupal import, and they are run with Drush exactly the same.
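As a refresher, with the Migrate Tools module installed, checking and running the migration looks like this (the migration id example_csv_people is hypothetical; use whatever id you declared in your migration config):

drush migrate:status example_csv_people
drush migrate:import example_csv_people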

Migration from XML/RSS

XML is a common data storage format that presents data in tagged markup. Many content management systems and databases have an ‘export as XML’ option. One advantage XML has over CSV is that you can put multiple data types into a single file. Of course, if you have lots of data, this advantage can turn into a disadvantage as the file size balloons! Weigh your choice carefully.

The Migrate Plus module has a data parser for XML, so if you’ve been following along with our series so far, you should already have this capability installed.

Much like CSV, you will have to connect to a file, rather than a database. RSS is a commonly used xml format, so we’ll walk through connecting to an RSS file for our example. I pulled some data from Phase2’s own blog RSS for our use, too.

https://www.phase2technology.com/ideas/rss.xml (Abbreviated)

<?xml version="1.0" encoding="utf-8"?>
<rss ... xml:base="https://www.phase2technology.com/ideas/rss.xml">
  <channel>
    <title>Phase2 Ideas</title>
    <link>https://www.phase2technology.com/ideas/rss.xml</link>
    <description/>
    <language>en</language>
    <item>
      <title>The Top 5 Myths of Content Migration *plus one bonus fairytale</title>
      <link>https://www.phase2technology.com/blog/top-5-myths-content</link>
      <description>The Top 5 Myths of Content Migration ... </description>
      <pubDate>Wed, 08 Aug 2018 14:23:34 +0000</pubDate>
      <dc:creator>Bonnie Strong</dc:creator>
      <guid isPermaLink="false">1304 at https://www.phase2technology.com</guid>
    </item>
  </channel>
</rss>


example_xml_migrate/config/install/migrate_plus.migration.example_xml_articles.yml

id: example_xml_articles
label: 'Import articles'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://www.phase2technology.com/ideas/rss.xml'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    -
      name: guid
      label: GUID
      selector: guid
    -
      name: title
      label: Title
      selector: title
    -
      name: pub_date
      label: 'Publication date'
      selector: pubDate
    -
      name: link
      label: 'Origin link'
      selector: link
    -
      name: summary
      label: Summary
      selector: description
  ids:
    guid:
      type: string
destination:
  plugin: 'entity:node'
process:
  title:
    plugin: get
    source: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: article


The key bits here are in the source configuration.




source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://www.phase2technology.com/ideas/rss.xml'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item


Much like CSV’s use of the csv plugin to read a file, XML is not using the d7_node or d7_taxonomy_term plugin to read the data. Instead, it’s pulling in a url and reading the data it finds there. The data_fetcher_plugin takes one of two different possible values, either http or file. HTTP is for a remote source, like an RSS feed, while File is for a local file. The urls config should be pretty obvious.

The data_parser_plugin specifies which PHP library to use to read and interpret the data. Possible parsers here include JSON, SOAP, XML, and SimpleXML. SimpleXML’s a great library, so we’re using that here.

Finally, item_selector defines where in the XML the items we’re importing can be found. If you look at our data example above, you’ll see that the actual nodes are in rss -> channel -> item. Each node would be an item.




fields:
  ...
  -
    name: pub_date
    label: 'Publication date'
    selector: pubDate
  ...


Here you see one of the fields from the XML. The label is just a human-readable label for the field, while the selector is the field within the XML item we’re getting.

The name is what we’ll call a pseudo-field. A pseudo-field acts as temporary storage for data. When we get to the Process section, the pseudo-fields are treated essentially as though they were fields in a database.

We’ve seen pseudo-fields before, when we were migrating taxonomy fields in Drupal 8 Migrations: Taxonomy and Nodes. We will see why they are important here in a minute, but there’s one more important thing in source.




ids:
  guid:
    type: string


This snippet sets the guid as the unique id of the article we’re importing. This guarantees uniqueness and is very important to specify.

Finally, we get to the process section.




process:
  ...
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  ...


So, here is where we’re using the pseudo-field we set up before. This takes the value from pubDate that we stored in the pseudo-field pub_date, does some formatting to it, and assigns it to the created field in Drupal. The rest of the fields are done in a similar fashion.

Destination is set up exactly like a Drupal-to-Drupal migration, and the whole thing is run with Drush the exact same way. Since RSS is a feed of real-time content, it would be easy to set up a cron job to run that drush command, add the --update flag, and have this migration go from one-time content import to being a regular update job that kept your site in sync with the source.
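A minimal sketch of such a cron entry (the schedule, path, and migration id are all illustrative):

# Re-import the feed hourly, updating any previously imported items.
0 * * * * cd /var/www/mysite && drush migrate:import example_xml_articles --update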

Migration from WordPress

A common migration path is from WordPress to Drupal. Phase2 recently did so with our own site, and we have done it for clients as well. There are several ways to go about it, but our own migration used the WordPress Migrate module.

In your WordPress site, under Tools >> Export, you will find a tool to dump your site data into a customized xml format. You can also use the wp-cli tool to do it from the command line, if you like.
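For example, with WP-CLI available on the server, an export can be generated like so (the directory and filename format here are illustrative):

# Dump the site content to a WXR (XML) file.
wp export --dir=/tmp --filename_format='example_output.wordpress.{date}.{n}.xml'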

Once you have this file, it becomes your source for all the migrations. Here’s some good news: it’s an XML file, so working with it is very similar to working with RSS. The main difference is in how we specify our source connections.

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_authors.yml

langcode: en
status: true
dependencies:
  enforced:
    module:
      - phase2_migrate
id: example_wordpress_authors
class: null
field_plugin_method: null
cck_plugin_method: null
migration_tags:
  - example_wordpress
  - users
migration_group: example_wordpress_group
label: 'Import authors (users) from WordPress WXL file.'
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'
  namespaces:
    wp: 'http://wordpress.org/export/1.2/'
    excerpt: 'http://wordpress.org/export/1.2/excerpt/'
    content: 'http://purl.org/rss/1.0/modules/content/'
    wfw: 'http://wellformedweb.org/CommentAPI/'
    dc: 'http://purl.org/dc/elements/1.1/'
  urls:
    - 'private://example_output.wordpress.2018-01-31.000.xml'
  fields:
    -
      name: author_login
      label: 'WordPress username'
      selector: 'wp:author_login'
    -
      name: author_email
      label: 'WordPress email address'
      selector: 'wp:author_email'
    -
      name: author_display_name
      label: 'WordPress display name (defaults to username)'
      selector: 'wp:author_display_name'
    -
      name: author_first_name
      label: 'WordPress author first name'
      selector: 'wp:author_first_name'
    -
      name: author_last_name
      label: 'WordPress author last name'
      selector: 'wp:author_last_name'
  ids:
    author_login:
      type: string
process:
  name:
    plugin: get
    source: author_login
  mail:
    plugin: get
    source: author_email
  field_display_name:
    plugin: get
    source: author_display_name
  field_first_name:
    plugin: get
    source: author_first_name
  field_last_name:
    plugin: get
    source: author_last_name
  status:
    plugin: default_value
    default_value: 0
destination:
  plugin: 'entity:user'
migration_dependencies: null


If you’ve been following along in our series, a lot of this should look familiar.




source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'


This section works just exactly like the XML RSS example above. Instead of using http, we are using file for the data_fetcher_plugin, so it looks for a local file instead of making an http request. Additionally, due to the difference in the structure of an RSS feed compared to a WordPress WXL file, the item_selector is different, but it works the same way.




namespaces:
  wp: 'http://wordpress.org/export/1.2/'
  excerpt: 'http://wordpress.org/export/1.2/excerpt/'
  content: 'http://purl.org/rss/1.0/modules/content/'
  wfw: 'http://wellformedweb.org/CommentAPI/'
  dc: 'http://purl.org/dc/elements/1.1/'


These namespace designations allow Drupal’s XML parser to understand the particular brand and format of the WordPress export.




urls:
  - 'private://example_output.wordpress.2018-01-31.000.xml'


Finally, this is the path to your export file. Note that it is in the private filespace for Drupal, so you will need to have private file management configured in your Drupal site before you can use it.




fields:
  -
    name: author_login
    label: 'WordPress username'
    selector: 'wp:author_login'


We’re also setting up pseudo-fields again, storing the value from wp:author_login in author_login.

Finally, we get to the process section.




process:
  name:
    plugin: get
    source: author_login


So, here is where we’re using the pseudo-field we set up before. This takes the value from wp:author_login that we stored in author_login and assigns it to the name field in Drupal.

Configuration for migrating the rest of the entities - categories, tags, posts, and pages - looks pretty much the same. The main difference is that the source will change slightly:

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_category.yml  (abbreviated)




source:
  ...
  item_selector: '/rss/channel/wp:category'


example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_tag.yml (abbreviated)




source:
  ...
  item_selector: '/rss/channel/wp:tag'


example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_post.yml (abbreviated)




source:
  ...
  item_selector: '/rss/channel/item[wp:post_type="post"]'


And, just like our previous two examples, WordPress migrations can be run with Drush.

A cautionary tale

As we noted in Managing Your Drupal 8 Migration, it’s possible to write custom Process Plugins. Depending on your data structure, it may be necessary to write a couple to handle values in these fields. On the migration of Phase2’s site recently, after doing a baseline test migration of our content, we discovered a ton of malformed links and media entities. So, we wrote a process plugin that did a bunch of preg_replace to clean up links, file paths, and code formatting in our body content. This was chained with the default get plugin like so:




process:
  body/value:
    -
      plugin: get
      source: content
    -
      plugin: p2body


The plugin itself is a pretty custom bit of work, so I’m not including it here. However, a post on custom plugins for migration is in the works, so stay tuned.
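That said, the general shape of such a plugin is straightforward. Here's a minimal, hypothetical sketch of a chainable process plugin whose id matches the p2body used in the config above (the actual cleanup logic will be whatever your content needs; the regex here is just a placeholder):

<?php

namespace Drupal\phase2_migrate\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Cleans up body HTML during migration.
 *
 * @MigrateProcessPlugin(
 *   id = "p2body"
 * )
 */
class P2Body extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Illustrative cleanup: rewrite absolute links to the old domain
    // (the domain is a placeholder) into relative paths.
    return preg_replace('#https?://www\.example-old-site\.com/#', '/', $value);
  }

}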

Useful Resources and References

If you’ve enjoyed this series so far, we think you might enjoy a live version, too! Please drop by our session proposal for Drupalcon Seattle, Moving Out, Moving In! Migrating Content to Drupal 8 and leave some positive comments.

Oct 29 2018
Oct 29

Yesterday, big tech was shaken by IBM’s acquisition of Red Hat for the staggering sum of $34B. Many were shocked by the news, but those who know Red Hat well may have been less surprised. Long the leader and the largest open source company in the world, Red Hat has been getting it right for many years.

Still more striking is how this deal fits a new pattern for 2018 and beyond, one completely different from the typical enterprise software acquisition of the past. Red Hat is not the first mega tech deal of the year for the open source community. (There was the $7.5B purchase of GitHub by Microsoft, and recently the $5.2B merger of big-data rivals Cloudera and Hortonworks.)

Now, this much larger move by IBM brings us to consider the importance of open source value and of contribution culture at large.

This was a great acquisition target for IBM:

  • Red Hat has a powerful product suite for some of the more cutting-edge aspects of web development, including a secure and fully managed version of Linux, hybrid cloud, and containerization technology, along with a large and satisfied customer base;

  • their products and technologies fit perfectly against IBM’s target market of enterprise digital transformation; and

  • the deal opens up a huge market to Red Hat via Big Blue.

And in the age we live in, one focused on (and fearful of) security, privacy, data domiciles, and crypto tech, a $14B premium over market cap ($74/share) is a validation of the open source model: shining sunlight on software produces more secure products.

At Phase2, this news is of particular interest. Red Hat is a company that we know very well for its contributions to open source and web technology in general. We have worked with Red Hat since 2013 and have come to respect them in several key ways.

As pioneers in the commercialization of open source, Red Hat popularized and legitimized the idea that open contribution and financial gain can co-exist. While our own experiments with productizing open source within the Drupal community over the years were certainly less publicized, we, and ostensibly the industry, looked to Red Hat as the archetype for a modern business model that could work.

We’ve had the privilege of working for, and alongside, the Red Hat team to develop many of the company’s websites over the last five years, including Redhat.com and developers.redhat.com. Through these experiences, we have come to value the way in which they blend great talent, great culture, and open values.

On many occasions, we have even drawn parallels between their business culture and our own. After reading The Open Organization by Red Hat CEO Jim Whitehurst, I was struck by the values and culture of Red Hat and their similarity to how Phase2 looks at the future. Perhaps it was their open source ethos, collaborative approach, or the meritocracy (vs. democracy or autocracy) they fostered, but I felt like we were emulating a “big brother”.

Finally, but perhaps most importantly, we respect them as a business. The fact that a larger-than-life brand like IBM would pay such a premium implies both strategic and business health. I believe that, while it is earned in part from a strong, repeatable, subscription-based revenue stream, nothing creates business value like a great culture of amazing people, dependable customers, and undeniable innovation.

And now with IBM’s extended reach and additional resources, we look forward to Red Hat’s continued success and partnership.

Oct 01 2018
Oct 01

One of the most exciting additions to Drupal 8.6 is the new experimental Layout Builder. Many are focused on Layout Builder replacing Panels, Panelizer, Display Suite, and even Paragraphs. The clean and modular architecture of Layout Builder supports a multitude of different use cases. It can even be used to create a WYSIWYG Mega Menu experience.

Note: Experimental

While Layout Builder was first added as experimental to Drupal 8.5, it has changed significantly since and is now considered more "beta" than "alpha". While still technically experimental and not officially recommended for production sites, the functionality and architecture has stabilized with Drupal 8.6 and it's time to start evaluating it more seriously.

What is a Mega Menu?

For the purposes of this discussion, I'll define a "Mega Menu" as simply a navigation structure where each item in the menu can expand to show a variety of different components beyond a simple list of links.

In the example above, we see a three-column menu item with two submenus, a search form, and a piece of static content (or a reference to another node).

Mega Menus present many challenges for a site including accessibility, mobile responsiveness, governance and revision moderation, etc. While I don't advocate the use of mega menus, sometimes they are an unavoidable requirement.

Past Solutions

I've seen many different implementations of Mega Menus over the years.

  • Modules such as we_megamenu (D8),  tb_megamenu (D7), etc.
  • Custom blocks (D8),
  • Hard-coded links, node references, and Form API rendered in theme,
  • MiniPanels rendered in the theme (D7)
  • Integrations with Javascript libraries such as Superfish
  • Custom site-specific code

These solutions had many problems and often didn't provide any easy way for site owners to make changes. Often these solutions caused headaches when migrating the site or supporting it over a long life cycle. I've known many teams who simply groan when a client mentions "we want mega menus."

Wouldn't it be nice if there was a consistent way in Drupal 8 to create and manage these menus with a component-based design architecture?

Layout Builder

The Layout Builder module can take control over the rendering of an entity view mode. Normally in Drupal, a view mode is just a list of fields you want to display, and in which order. These simplistic lists of fields are usually passed to a theme template responsible for taking the raw field data and rendering it into the designed page.

With Layout Builder, a view mode consists of multiple "sections" that can contain multiple "blocks." A "Section" references a specific "Layout" (2 column, 3 column, etc). Each field of the entity can be displayed via a new field_block. Thus, a traditional view mode is just a single section with a one-column layout filled with a block for each field to be displayed.

The core Layout Discovery module is used to locate the available "layouts" on the site that can be assigned to a Section. Core comes with one column, two column, and three column (33/33/33 and 25/50/25) layouts. Custom layout modules can be easily created to wrap a twig template for any layout needed within a section.

Blocks for each field can be added to a section, along with any other predefined or custom block on the site. Core also provides "inline blocks" that are instances of custom blocks referenced by the node but not shown in the global block layout admin view.

When an inline block is edited, a new revision of the block is created and a new revision of the node entity is created to reference it, allowing layout changes to be handled with the same workflow as normal content changes.

Section Storage

Layout Builder uses a Section Storage plugin to determine how the list of block UUIDs referenced in a layout section is stored. The default layout for a content type is stored in the third_party_settings for the specific view mode configuration. If node-specific overrides are enabled for the bundle, the overriding list of blocks in the section is stored within a layout_builder__layout field added to the node.

While the use of Layout Builder is focused on Nodes (such as Landing Pages), the Layout Builder architecture actually works with any entity type that supports the Section Storage. Specifically, any entity that is "fieldable" is supported.

Fieldable Menu Items

If Layout Builder works with any fieldable entity, how can we make a Menu Item entity fieldable? The answer is the menu_item_extras contrib module. This module allows you to add fields to a menu entity along with form and display view modes. For example, you can add an "icon" image field that will be displayed next to the menu link.

The Menu Item Extras module has been used in Drupal 8 for a while to implement mega menus via additional fields. However, in Drupal 8.6 you don't need to add your own fields, you just need to enable Layout Builder for the default menu item view display mode:

When you allow each menu link to have its layout customized, a layout_builder__layout field is added to the menu item to store the list of blocks in the sections. When you Add a Link to your menu, a new tab will appear for customizing the layout of the new menu link item:

The Layout tab will show the same Layout Builder UI used to create node landing pages, except now you are selecting the blocks to be shown on the specific menu item. You can select "Add Section" to add a new layout, then "Add Block" to add blocks to that section.

In the example above I have used the optional Menu Block module to add submenus of the Drupal Admin menu (Structure and Configuration) to the first two columns (default core menu blocks do not allow the parent to be selected, but the Menu Block contrib module adds that). In the third column the Search Form block was added, and below that is an "Inline Block" using the core Basic Block type to add static text to the menu item.

Theming the Menu

The Menu Item Extras module provides twig templates for handling the display of the menu item. Each menu item has a "content" variable that contains the field data of the view mode, just like with any node view mode.

Each theme will need to decide how best to render these menus. Using a subtheme of the Bootstrap theme I created the following menu-levels.html.twig template to render the example shown at the beginning of this article:

<ul{{ attributes.addClass(['menu', 'menu--' ~ menu_name|clean_class, 'nav', 'navbar-nav']) }}>
 {% for item in items %}
   {% set item_classes = [
     item.is_expanded ? 'expanded',
     item.is_expanded and menu_level == 0 ? 'dropdown',
     item.in_active_trail ? 'active',
     ]
   %}
   <li{{ item.attributes.addClass(item_classes) }}>
      <a href="{{ item.url }}" class="dropdown-toggle" data-toggle="dropdown">{{ item.title }} <span class="caret"></span></a>
     <div class="dropdown-menu dropdown-fullwidth">
       {{ item.content }}
     </div>
   </li>
 {% endfor %}
</ul>

Summary

The combination of Layout Builder and Menu Item Extras provides a nearly WYSIWYG experience for site owners to create complex mega menus from existing block components. While this method still requires a contrib module, the concept of making a menu item entity fieldable is a clean approach that could easily find its way into core someday. Rather than creating yet another architecture and data model for another "mega menu module", this approach simply relies on the same entity, field, and view mode architecture used throughout Drupal 8.

While Layout Builder is still technically "experimental", it is already very functional. I expect to see many sites start to use it in the coming months and other contrib modules to enhance the experience (such as Layout Builder Restrictions) once more developers embrace this exciting new functionality in Drupal core.

My thanks to the entire team of developers who have worked on the Layout Initiative to make Layout Builder a reality. I look forward to it being officially stable in the near future.

Jul 17 2018
Jul 17

If you're not familiar with GatsbyJS, then you owe it to yourself to check it out. It's an up-and-coming static site generator with React and GraphQL baked in, and it prides itself on being really easy to integrate with common CMSes like Drupal.

In other words, Gatsby lets you use Drupal as the backend for a completely static site. This means you get a modern frontend stack (React, GraphQL, Webpack, hot reloading, etc.) and a fully static site (with all of the performance and security benefits that come along with static sites) while still keeping the power of Drupal on the backend. 

Let's give it a shot! In this post, we'll see just how simple it is to use Drupal 8 as the backend for a Gatsby-powered static site. 

Step 1: Set up Drupal

This step is super easy. You basically just have to install and configure the JSON API module for Drupal 8, and you're done. 

First off (assuming you already have a Drupal 8 site running), we'll just download and install the JSON API module.

composer require drupal/jsonapi
drupal module:install jsonapi

Now we just have to make sure we grant anonymous users read permission on the API. To do this, go to the permissions page and check the "Anonymous users" checkbox next to the "Access JSON API resource list" permission. If you skip this step, you'll be scratching your head about the endless stream of 406 error codes.

After this you should be all set. Try visiting http://YOURSITE.com/jsonapi and you should see a list of links. For example, if you have an "Article" content type, you should see a link to http://YOURSITE.com/jsonapi/node/article, and clicking that link will show you a JSON list of all of your Article nodes.
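You can run the same check from the command line, too (assuming an "Article" content type):

curl http://YOURSITE.com/jsonapi/node/article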

Working? Good. Let's keep moving.

Step 2: Install GatsbyJS

Now we need to work on Gatsby. If you don't have it installed already, run this to grab it:

npm install --global gatsby-cli

That'll give you the "gatsby" cli tool, which you can then use to create a new project, like so:

gatsby new YOURSITENAME

That command basically just clones the default Gatsby starter repo, and then installs its dependencies inside it. Note that you can include another parameter on that command which tells Gatsby that you want to use one of the starter repos, but to keep things simple we'll stick with the default.

Once complete, you have the basis for a working Gatsby site. But that's not good enough for us! We need to tell Gatsby about Drupal first.

Step 3: Tell Gatsby about Drupal

For this part, we'll be using the gatsby-source-drupal plugin for Gatsby. First, we need to install it:

cd YOURSITENAME
npm install --save gatsby-source-drupal

Once that's done, we just need to add a tiny bit of configuration for it, so that Gatsby knows the URL of our Drupal site. To do this, edit the gatsby-config.js file and add this little snippet to the "plugins" section:

plugins: [
  {
    resolve: `gatsby-source-drupal`,
    options: {
      baseUrl: `http://YOURSITE.COM`,
    },
  },
]

You're all set. That's all the setup that's needed, and now we're ready to run Gatsby and have it consume Drupal data.

Step 4: Run Gatsby

Let's kick the tires! Run this to get Gatsby running:

gatsby develop

If all goes well, you should see some output like this:

You can now view gatsby-starter-default in the browser.

  http://localhost:8000/

View GraphiQL, an in-browser IDE, to explore your site's data and schema

  http://localhost:8000/___graphql

Note that the development build is not optimized.
To create a production build, use gatsby build

(If you see an error message instead, there's a good chance your Drupal site isn't set up correctly and is erroring. Try manually running "curl yoursite.com/jsonapi" in that case to see if Drupal is throwing an error when Gatsby tries to query it.)

You can load http://localhost:8000/ but you won't see anything particularly interesting yet. It'll just be a default Gatsby starter page. It's more interesting to visit the GraphQL browser and start querying Drupal data, so let's do that.

Step 5: Fetching data from Drupal with GraphQL

Load up http://localhost:8000/___graphql in a browser and you should see a GraphQL UI called GraphiQL (pronounced "graphical") with cool stuff like autocomplete of field names and a schema explorer. 

Clear out everything on the left side, and type an opening curly bracket. It should auto-insert the closing one for you. Then you can hit ctrl+space to see the autocomplete, which should give you a list of all of the possible Drupal entity types and bundles that you can query. It should look something like this:

Entity autocomplete

For example, if you want to query Event nodes, you'll enter "allNodeEvent" there, and drill down into that object.

Here's an example which grabs the "title" of the Event nodes on your Drupal site:

{
  allNodeEvent {
    edges {
      node {
        title
      }
    }
  }
}

Note that "edges" and "node" are concepts from Relay, the GraphQL library that Gatsby uses under the hood. If you think of your data like a graph of dots with connections between them, then the dots in the graph are called “nodes” and the lines connecting them are called “edges.” You don't need to worry about this at the moment. For now, just get used to typing it.

Once you have that snippet written, you can click the play icon button at the top to run it, and you should see a result like this on the right side:

{
  "data": {
    "allNodeEvent": {
      "edges": [
        {
          "node": {
            "title": "Test node 1"
          }
        },
        {
          "node": {
            "title": "Test node 2"
          }
        },
        {
          "node": {
            "title": "Test node 3"
          }
        }
      ]
    }
  }
}

Note that this same pattern can give you pretty much any data you want from Drupal, including entity reference field data or media image URIs, etc. As a random example, here's a snippet from the Contenta CMS + GatsbyJS demo site:

{
  allNodeRecipe {
    edges {
      node {
        title
        preparationTime
        difficulty
        totalTime
        ingredients
        instructions
        relationships {
          category {
            name
          }
          image {
            relationships {
              imageFile {
                localFile {
                  childImageSharp {
                    fluid(maxWidth: 470, maxHeight: 353) {
                      ...GatsbyImageSharpFluid
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Pretty cool right? Everything you need from Drupal, in one GraphQL query.

So now we have Gatsby and Drupal all set up and we know how to grab data from Drupal, but we haven't actually changed anything on the Gatsby site yet. Let's fix that.

Step 6: Displaying Drupal data on the Gatsby site

The cool thing about Gatsby is that GraphQL is so baked in that it assumes that you'll be writing GraphQL queries directly into the components/templates.

In your codebase, check out src/pages/index.js and you should see some placeholder content. Delete everything in there, and replace it with this:

import React from 'react'

class IndexPage extends React.Component {

  render() {
    const pages = this.props.data.allNodePage.edges
    const pageTitles = pages.map(page => <li>{page.node.title}</li>)
    return <ul>{pageTitles}</ul>
  }
}

export default IndexPage

export const query = graphql`
  query pageQuery {
    allNodePage {
      edges {
        node {
          title
        }
      }
    }
  }
`

(Note, this assumes you have a node type named "Page"). 

All we're doing here is grabbing the node titles via the GraphQL query at the bottom, and then displaying them in a bulleted list. 

Here's how that looks on the frontend:

Gatsby showing node titles

And that's it! We are displaying Drupal data on our Gatsby site!

Step 7: Moving on

From here, you'll probably want to look at more complex stuff like creating individual pages in Gatsby for each Drupal node, or displaying more complicated data, or resizing Drupal images, etc. Go to it! The static site is your oyster! 

When you're happy with it, just run "gatsby build" and it'll export an actual static site that you can deploy anywhere you want, like Github Pages or Amazon S3 or Netlify.
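For instance, per-node pages are typically generated in gatsby-node.js with the createPages API. Here's a minimal sketch (the template path and the "Page" node type are assumptions, and the exact API signature varies between Gatsby versions):

// gatsby-node.js
const path = require('path')

exports.createPages = ({ graphql, actions }) => {
  const { createPage } = actions

  // Query every Drupal Page node, then create a Gatsby page for each one.
  return graphql(`
    {
      allNodePage {
        edges {
          node {
            id
            title
          }
        }
      }
    }
  `).then(result => {
    result.data.allNodePage.edges.forEach(({ node }) => {
      createPage({
        path: `/page/${node.id}`,
        component: path.resolve('./src/templates/page.js'),
        context: { id: node.id },
      })
    })
  })
}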

Have fun!

Jul 12 2018
Jul 12

One of the most fundamental tasks of back-end Drupal 8 development is learning how to capture and utilize data. Unfortunately, as a new developer, trying to do so feels like wandering into an endless labyrinth of arrays, methods, objects, and arcane wizardry.

Say you want to get the summary off of a body field, so you try something like $node->values['field_article_body'][0]['summary'], but that doesn’t work. Then you remember that you probably need to use a get method, and you recall seeing something like $node->getValue('field_article_body') before, but that doesn’t work either.

Suddenly you find yourself lost in the labyrinth, desperately hoping for one of your guesses to be correct, only to eventually get eaten by a minotaur (read: get frustrated and give up). Now, if you remember your Greek mythology, the way Theseus was able to triumph over the labyrinth where others had failed was by tying some twine to himself so that he could retrace his steps. The point of this blog post is to give you that twine so that next time you don’t have to guess your way to the data.

Remember your training

First, remember that D8 is based on object oriented programming (OOP) and think about what that really means. While it is indeed intricate, at its core it’s really just a series of classes and objects that extend off of each other. Plugins, entities, blocks, services, etc. might sound like complex concepts, but at the end of the day these are all just different kinds of classes with different uses and rulesets.

For a long time I was conscious of D8’s OOP nature, but I really only thought of it in terms of building new things from that foundation. I never thought about this crucial principle when I was trying to pull data out of the system, and pulling in my OOP knowledge was the first step in solving this problem.

Down into the labyrinth

Let’s take a simple example. Say you have the following node loaded and you want to use the title of the node.

(note: these screenshots are from xdebug, but you can get the same information by printing the variables to the page using var_dump() or kint())

After digging around in the node you find the title here:

But as we’ve already established, something like $node->values['title'] won’t work. This is because $node is not simply an array; it’s a full object. Near the top you’ll notice that Xdebug is telling us exactly which class is creating the object below, Drupal\node\Entity\Node. If you go to that file you will see the following method on that class that will get you the data you need:

public function getTitle() {
  return $this->get('title')->value;
}

Meaning, you can just run $node->getTitle() to get that node’s title. Notice the host of other useful methods there as well: getCreatedTime(), getOwner(), postSave(). All of these methods and more are available and documented for when you want to manipulate that node.
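For example, a quick sketch pulling a few common values off a loaded node with these core methods:

$title = $node->getTitle();
$created = $node->getCreatedTime();
$author_name = $node->getOwner()->getDisplayName();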

These aren’t the only methods you have available to you. In fact, if you look at the actual code in the getTitle() function you’ll see that it’s using a get method that’s nowhere to be found in this class. The rules of OOP suggest that if the method is useable but not in the class itself it’s probably being extended from another class. In fact, the class declaration for node extends EditorialContentEntityBase, which might not have anything useful on its own but it does extend ContentEntityBase which holds a plethora of useful methods, including the aforementioned get function!

public function get($field_name) {
  if (!isset($this->fields[$field_name][$this->activeLangcode])) {
    return $this->getTranslatedField($field_name, $this->activeLangcode);
  }
  return $this->fields[$field_name][$this->activeLangcode];
}

Notice how this get method seems to be designed for getting field values; we could probably get closer to the summary value I mentioned earlier by going $node->get('field_article_body'). If you run that method you get a different object entirely, a FieldItemList.

Once again, we can dig through the provided classes and available methods. FieldItemList extends ItemList which has a getValue() method which gets us even closer.

Now, instead of an object, we’re returning a simple array, which means we can use regular array notation to finally get that summary value: $node->get('field_article_body')->getValue()[0]['summary'].

So what did we actually do?

Pay special attention to the structure of this final call we’re using. Parsing it out like so demonstrates that it’s no mere guess-and-check, rather it’s a very logical sequence of events.

/** @var \Drupal\Core\Field\FieldItemList $field */
$field = $node->get('field_article_body'); // Method call returning a FieldItemList object

/** @var array $field_values */
$field_values = $field->getValue(); // Method call returning a plain array

/** @var string $summary */
$summary = $field_values[0]['summary']; // Array notation to get the value from an array

This also makes it obvious why our previous attempt of $node->values['title'] can’t work. It’s trying to get a values property off the node object, when no such thing exists in the node class declaration.

Rule-breaking magic!

That being said, another perfectly valid way to get the summary field is $node->field_article_body->summary. Now, on first glance, this appears to contradict what I just said. The $node object obviously doesn’t have a field_article_body property in its class declaration. The reason this works is because it is successfully calling a magic method. PHP has a number of magic methods that can always be called on an object; these methods are easy to find because they start with a double underscore (__set(), __get(), __construct(), etc.). In this case, since we’re attempting to access a property that does not exist on the class, Drupal knows to use the __get() magic method to look at the properties on this instance of the object, and this instance is a node with a field named field_article_body and, by definition, a property of the same name. If you look further down the ContentEntityBase class you’ll see this very &__get() method.

PHPstorm Shortcuts

It’s worth noting that if you’re writing your code in an IDE like PHPstorm, this whole process gets a lot easier because the autocomplete will show you all the methods that are available to you regardless of which class they come from. That being said, being able to manually drill down through the classes is still useful for when you need a clearer idea of what the methods are actually doing.

Another amazing PHPstorm tool is being able to jump directly to the declaration of a method to see where it’s coming from. For instance in the aforementioned getTitle() method, you can right click the contained get() method and select Go To > Declaration to jump directly to the section of ContentEntityBase where that get() method is first declared.

What’s Next?

Don’t worry if this doesn’t make perfect sense your first time through, this is a complex system. It might help to think of this as simply reverse Object Oriented Programming. Rather than using OOP to build something new, you’re reverse-engineering it to grab something that already exists. I recommend messing around with this technique in your own Drupal environment. Try it with different object types, try it in different contexts (for instance, in an alter hook versus in a plugin). The more practice you get, the more comfortable you’ll feel with this essential technique.

Thanks for reading Part 1 of my Backend Drupal 8 series. Check back soon for Part 2, where I’m going to try to convince you to start using Xdebug.

Apr 16 2018
Apr 16

There’s no doubt that the digital landscape looks very different these days. When we talk about an organization's digital presence we are talking about a whole lot more than websites or content management systems.  

At Drupalcon Nashville, we got down to business with our Drupal community, partners and clients to discuss where Drupal fits into this new digital ecosystem, customer experience trends, Drupal 8 best practices, and how to maintain a competitive digital experience platform in this fast-moving, ever-changing market.

The Customer Experience Landscape

61% think that chatbots allow for faster resolution for customer service answers.

Source: Aspect Software Study

Almost 3⁄4 of regular voice technology users believe brands should have unique voices and personalities for their apps.

Source: SONARTM, J. Walter Thompson's proprietary in-house research tool

In short, audiences want to engage with brands on the channels they are already using. These customer experience (CX) expectations are driving channel explosion. With the proliferation of channels and new digital touchpoints, organizations are forced to undergo digital transformation to stay relevant and competitive. Our own Jeff Walpole addressed these market trends in his Drupalcon session: Beyond Websites: The Digital Experience Platform.

Massive Discrepancy Between Brands and Consumers

A recent eConsultancy report indicated that 81% of consumer brands believe they have a holistic view of their customers. Conversely, only 37% of consumers feel that they are actually understood by their favorite brands.

It’s clear that understanding the customer and their experience with your brand is essential to developing a competitive presence.  This year, our Drupalcon booth theme addressed this directly with the rallying cry: “Create the experience. Deliver the results.” We asked Drupalcon attendees to engage with our interactive data visualization board to crowd-source the community’s thoughts on the impact of customer experience.

Phase2's Director of Marketing discussing customer experience at the DrupalCon booth

As we explained in our booth, leveraging engagement data is essential to successful customer experience. We took the booth experience further by creating a digital experience that seamlessly flowed from the in-person experience to our attendees' phones using Acquia Journey, a customer journey orchestration tool allowing us to serve up a personalized experience for each attendee.


The Need for Drupal to Evolve

Just as brands need to evolve to meet the demands of their customers, Drupal needs to evolve to engage the right audience and compete with digital experience platforms like Adobe. We were thrilled to participate in the Drupal Association's marketing fundraiser to raise funds to support more marketing material for Drupal.

screenshot of Drupal marketing fundraising efforts

 

As we grow and transform with the market, culture becomes more important than ever. Our own culture expert, Nicole Lind, was a featured speaker this year, discussing Why Building Awesome Culture is Essential, Not Just a Nice-to-Have.

Drupalcon is always an inspirational and energizing event, with a week full of great sessions, critical discussions, and perhaps too much hot chicken. We look forward to continuing our work with the community, building impactful digital experience platforms with Drupal.
 

Feb 13 2018
Feb 13

In this post, we’ll begin to talk about the development considerations of actual website code migration and other technological details. In these exercises, we’re assuming that you’re moving from Drupal 6 or 7 to Drupal 8. In a later post, I will examine ways to move other source formats into Drupal 8 - including CSV files, non-Drupal content management systems, or database dumps from weird or proprietary frameworks.

Migration: A Primer

Before we get too deep into the actual tech here, we should probably take a minute to define some terms and explain what’s actually happening under the hood when we run a migration, or the rest of this won’t make much sense.

When we run a migration, what happens is that the Web Server loads the content from the old site, converts it to a Drupal 8 format, and saves it in the new site.  Sounds simple, right?

Actually, it pretty much is that simple. At least, conceptually. So, try to keep those three steps in mind as we go through the hard stuff later. Everything we do is designed to make one of those three steps work.

Key Phrases

  • Migration: The process of moving content from one site to another. ‘A migration’ typically refers to all the content of a single content or entity type (in other words, one node type, one taxonomy, and so on).

  • Migration Group: A collection of Migrations with common traits

  • Source: The Drupal 6 or 7 database from which you’re drawing your content (or other weird source of data, if applicable)

  • Process: The stuff that Drupal code does to the data after it’s been loaded, in order to digest it into a format that Drupal 8 can work with

  • Destination: The Drupal 8 site

Interestingly, each of those key phrases above corresponds directly to a code file that’s required for migration. Each Migration has a configuration (.yml) file, and each is individually tailored for the content of that entity. As config files, each of these is pretty independent and not reusable. However, we can also assign them to Migration Groups. Groups are also configuration (.yml) files. They allow us to declare common configurations once, and reuse them in each migration that belongs to that group.

The Source Plugin code is responsible for doing queries to the Source database, retrieving the data, and formatting it into PHP objects that can be worked on. The Process Plugin takes that data, does stuff to it, and passes it to the next step. The Destination Plugin then saves it in Drupal 8 format.  Rinse, repeat.

On a Drupal-to-Drupal migration, around 75% of your time will be spent working in the Migration or Migration Group config, declaring the different Process Plugins to use. You may wind up writing one or more Process Plugins as part of your migration development, but a lot of really useful ones are included in Drupal core migration code and are documented here. A few more are included with Migrate Plus.
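
For a flavor of what those plugins look like in use, here is a minimal sketch of two common core process plugins - default_value and callback - applied to invented field names (nothing here comes from a real migration):

# Hypothetical mappings, for illustration only.
field_subtitle:
  # Fall back to an empty string when the source value is empty.
  plugin: default_value
  source: field_subtitle
  default_value: ''
name:
  # Pass the source value through a PHP callable before saving.
  plugin: callback
  callable: trim
  source: name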

Drupal 8 core has Source Plugins for all standard Drupal 6 and Drupal 7 entity types (node, taxonomy, user, etc.). The only time you’ll ever need to write a Source plugin is for a migration from a source other than Drupal 6 or 7, and many of these are already available as Contrib modules.

Also included in Drupal core are Destination Plugins for all of the core entity types. Unless you’re using a custom entity in Drupal 8, and migrating data into that entity, you’ll probably never write a Destination Plugin.

Development Foundations

There are a few key requirements you need to have in place before you can begin development.  First, and probably foremost, you need to have both your Drupal 6/7 and Drupal 8 sites - the former full of all your valuable content, and the latter empty of everything but structure.

An important note: though the completed migration will be run on your production server, you should be using development environments for this work. At Phase2, we use Outrigger to simplify and standardize our dev and production environments.

For migration purposes, we only actually need the Drupal 7 site’s database itself, in a place that’s accessible to the destination site.  I usually take an SQL dump from production, and install it as an additional database on the same server as the destination, to avoid network latency and complicated authentication requirements. Obviously, unless you freeze content for the duration of the migration development, you’ll have to repeat this process for final content migration on production.

I’d like to reiterate some advice from my last post: I strongly recommend sanitizing user accounts and email addresses on your development databases.  Use drush sql-sanitize and avoid any possibly embarrassing and unprofessional gaffes.

On your Drupal 8 site, you should already have completed the creation of the new content types, based on information you discovered and documented in your first steps.  This should also encompass the creation of taxonomy vocabularies, and any fields on your user entities.

In your Drupal 8 settings.php file, add a second database config array pointed at the Drupal 7 source database.

sites/default/settings.php

$databases['migration_source_db']['default'] = array(
  'database' => 'example_source',
  'username' => 'username',
  'password' => 'password',
  'prefix' => '',
  'host' => 'db',
  'port' => '',
  'namespace' => 'Drupal\Core\Database\Driver\mysql',
  'driver' => 'mysql',
);


Finally, you’ll need to add the migration module suite to your site.  The baseline for migrations is migrate, migrate_drupal, migrate_plus, and migrate_tools.  The Migrate and Migrate Drupal modules are core code. Migrate provides the basic functionality required to take content and put it into Drupal 8.  Migrate Drupal provides code that understands the structure of Drupal 6 and 7 content, and makes it much more straightforward to move content forward within the Drupal ecosystem.

Both Migrate Plus and Migrate Tools are contributed modules available at drupal.org. Migrate Plus, as the name implies, adds some new features, most importantly migration groups. Migrate Tools provides the drush integration we will use to run and rollback migrations.

Drupal 8 core code also provides migrate_drupal_ui, but I recommend against using it. By using Migrate Tools, we can make use of drush, which is more efficient, can be incorporated into shell scripts, and has clearer error messages.

Framing the House

We’ve done the planning and laid the foundations, so now it’s time to start building this house!

We start with a new, custom module. This can be pretty bare-bones at first.

example_migrate/example_migrate.info.yml

type: module
name: 'Example Migrate'
description: 'Example custom migrations'
package: 'Example Migrate'
core: '8.x'
dependencies:
  - drupal:migrate
  - drupal:migrate_plus
  - drupal:migrate_tools
  - drupal:migrate_drupal


Within our module folder, we need a config/install directory. This is where all our config files will go.

Migration Groups

The first thing we should make is a general migration group. While it’s possible to put all the configuration into each and every migration you write, I’m a strong believer in DRY programming (Don’t Repeat Yourself).  Migrate Plus gives us the ability to put common configuration into a single file and use it for multiple migrations, so let’s take advantage of that power!

Note the filename we’re using here. This naming convention gives Migrate Plus the ability to find and parse this configuration, and marks it as a migration group.

example_migrate/config/install/migrate_plus.migration_group.example_general.yml

# The machine name of the group, by which it is referenced in individual migrations.
id: example_general
# A human-friendly label for the group.
label: General Imports
# More information about the group.
description: Common configuration for simple migrations.
# Short description of the type of source, e.g. "Drupal 6" or "WordPress".
source_type: Drupal 7 Site
# Here we add any default configuration settings to be shared among all
# migrations in the group.
shared_configuration:
  source:
    key: migration_source_db
# We add dependencies just to make sure everything we need will be available.
dependencies:
  enforced:
    module:
      - example_migrate
      - migrate_drupal
      - migrate_tools


This is a very simple group that we’ll use for migrations of simple content. Most of the stuff in here is self-descriptive. However, source is a critical config - it uses the key of the database configuration we added earlier, to give migrate access to that database. We’ll examine a more complicated migration group another time.

User Migration

In Drupal, users pretty much have their fingers in every pie.  They are listed as authors on content, they are creators of files… you get the picture.  That’s why it’s usually the first migration to get run.

Note again the filename convention here, which allows Migrate Plus to find it, and marks it as a migration (as opposed to a group).

example_migrate/config/install/migrate_plus.migration.example_user.yml

# Migration for user accounts.
id: example_user
label: User Migration
migration_group: example_general
source:
  plugin: d7_user
destination:
  plugin: entity:user
process:
  mail:
    plugin: get
    source: mail
  status: status
  name:
    -
      plugin: get
      source: name
    -
      plugin: dedupe_entity
      entity_type: user
      field: name
  roles:
    plugin: static_map
    source: roles
    map:
      2: authenticated
      3: administrator
      4: author
      5: guest_author
      6: content_approver
  created: created
  changed: changed
migration_dependencies:
  required: { }
dependencies:
  enforced:
    module:
      - example_migrate


Wow! There’s lots of stuff going on here.  Let’s try and break it down a bit.

id: example_user
label: User Migration
migration_group: example_general


The id designation is a standard machine name for this migration. We will call this with drush to run the migration. Label is a standard human-readable name. The migration_group should be obvious - it connects this migration to the group we designed above, which means we now inherit all the configuration declared there. Notably, that connects us to the D7 database.

source:
  plugin: d7_user
destination:
  plugin: entity:user


Here are two key items.  The source plugin defines where we are getting our data, and what format it’s going to come in.  In this case, we are using Drupal core’s d7_user plugin.

The destination plugin defines what we’re making out of that data, and the format it ends up in.  In this case, we’re using Drupal core’s entity:user plugin.

process:
  mail:
    plugin: get
    source: mail
  status: status
  name:
    -
      plugin: get
      source: name
    -
      plugin: dedupe_entity
      entity_type: user
      field: name
  roles:
    plugin: static_map
    source: roles
    map:
      2: authenticated
      3: administrator
      4: author
      5: guest_author
      6: content_approver
  created: created
  changed: changed


Now we get into the real meat of a migration - the Process section. Each field you’re going to migrate has to be defined here. They are keyed by their field machine name in Drupal 8.  

Each field assigns a plugin parameter, which defines the Process Plugin to use on the data. Each of these process plugins will take a source parameter, and then possibly others.  The source parameter defines the field in the data array provided by the source plugin.  (Yeah, like I’ve said before, naming things clearly isn’t Drupal’s strong suit).

Our first example is mail. Here we are assigning it the get process plugin. This is the easiest process to understand, as it literally takes the data from the old site and gives it to the new site without transforming it in any way. Since email addresses don’t have any formatting changes or necessary transformations, we just move them.

In fact, the get process plugin is Drupal’s default, and our next example shows a shortcut to use it. The status field is getting its data from the old status field. Since get is our default, we don’t even need to actually specify the plugin, and the source is simply implied. See the documentation on drupal.org for more detail.
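
Put another way, assuming a simple field like status, the two mappings below should behave identically - a small sketch of the same mapping written long-hand and with the shortcut:

# Long form, with the plugin spelled out:
status:
  plugin: get
  source: status

# Equivalent shortcut form, relying on get as the default:
status: status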

Name is a slightly more complicated matter.  While usernames don’t change much in their format, we want to make absolutely sure that they are unique.  This leads us to Plugin Chaining, an interesting option that allows us to pass data from one plugin to another, before saving it. The YML array syntax, as demonstrated above, allows us to define more than one plugin for a single field.

We start off by defining the get plugin, which just gets the data from a source field. (You can’t use the default shortcut when you’re chaining, incidentally.)

We then pass it off to the next plugin in the chain, dedupe_entity. This plugin ensures that each record is absolutely certain to be unique.  It has the additional parameters entity_type and field. These define the entity type to check against for uniqueness, and the field in which to look on that entity. See the documentation for more detail.

Note that this usage of dedupe_entity does not specify a source parameter.  That’s because plugin chaining hands off the data from the first plugin in line to the next, becoming, in effect, the source.  It’s very similar to method chaining in jQuery or OOP PHP.  You can chain together as many process plugins as you need, though if you start getting up above four it might be time to re-evaluate what you’re doing, and possibly write a custom processor.

Our final example to examine is roles. User roles in Drupal 7 were keyed numerically, but in Drupal 8 they are based on machine names.  The static_map plugin takes the old numbers, and assigns them to a machine name, which becomes the new value.

The last two process items are changed and created. Like status, they are using the get process plugin, and being designated in the shortcut default syntax.

migration_dependencies:
  required: { }
dependencies:
  enforced:
    module:
      - example_migrate


The last two configs are pretty straightforward.  Migration Dependencies are used when a migration requires data from other migrations (we’ll get into that more another time). Dependencies are used when a migration requires a specific additional module to be enabled. In my opinion it’s pretty redundant with the dependencies declared in the module itself, so I don’t use it much.
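
For illustration, if we later wrote a node migration whose authors had to exist first, its migration_dependencies might look something like this sketch (example_article is a hypothetical migration id):

# In migrate_plus.migration.example_article.yml (illustrative):
migration_dependencies:
  required:
    # Run the user migration first so author references can resolve.
    - example_user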

In the next post, we’ll cover taxonomy migrations and simple node migrations. We’ll also share a really useful tool for migration development.  Thanks for reading!

Jan 25 2018
Jan 25

If you are considering a move to Drupal 8, or upgrading your current Drupal platform, it’s likely that you’ve come across the term “decoupled Drupal”, aka “headless Drupal”. But do you know what it means and what the implications of decoupled Drupal are for marketers? In this guide we will define decoupled Drupal and share three reasons why marketers should consider a decoupled architecture as they evolve their digital experience platforms.

What is Decoupled Drupal?

Think about your favorite department store. You walk in and enjoy the ambiance, interact with the items across departments, maybe chat with an employee, but you never venture into the back of the store. The back of the store exists to house items that employees can access and feature in the departments for customers to see.

With decoupled Drupal, a website visitor will not interact with Drupal directly, much like shoppers do not interact with the back of a brick-and-mortar store. The visitor will see pages created with JavaScript frameworks (like Angular.js, React.js, or Ember), rather than a traditional Drupal theme. The Drupal CMS serves as an interface for editors to enter content, but its primary use is as a store for content.

To put it very simply, decoupled Drupal separates your front end experiences from the back end CMS. Here’s an image to help you visualize the difference between a more traditional Drupal architecture and a decoupled Drupal setup.

image of decoupled Drupal architecture vs. more traditional Drupal setup

If you would like to dive in further, here is a great blog by Dries Buytaert on decoupled Drupal architecture in 2018.

Why Should Marketers Consider Decoupled Drupal?

Make Multi-Platform a Breeze

If you are a large organization with many digital properties to maintain and update, then a decoupled Drupal backend can make your life a lot easier. By serving as a content repository, the decoupled CMS allows you to serve up dynamic content across many different places, including mobile apps, voice tech platforms, IoT devices, and future tech down the road.

Create Beautiful Front End Experiences

It’s no secret that traditional Drupal architectures come with some design limitations that can prevent designers and front-end developers from properly implementing a modern design system that offers exceptional user experience.

Decoupled Drupal facilitates the use of external design systems. In this approach, Drupal is only responsible for gathering data, passing that data to an external design system, and handing over control of the markup to that system, ensuring that your content will present beautifully across all of your digital platforms.

Boost Marketing Agility to Provide Superior Customer Experience

Updating and redesigning digital properties quickly and with customer expectations in mind is a huge and never-ending challenge for marketers, not to mention a huge investment of time and resources across development, design, and marketing departments.

Updates and redesigns within a traditional Drupal architecture typically take quite some time because both the back end and the front end must be modified, meaning that you, the marketer, are relying on both developers and designers to complete the project. CX is evolving so fast that by the time you wrangle your development team, bring in design, and agree on the way forward, you may find that your proposed changes already look dated!

Decoupling your Drupal CMS allows you to make upgrades to the back end without impacting UX on the frontend. And in turn, you can make design and UX changes to the front end independently from the back end.

In a decoupled architecture you keep the CMS as a long-term product on the back end, but can make important front end UX changes that impact customer acquisition and retention more frequently and more cheaply.

Decoupled Drupal is not for everyone. If you only need to manage content for your company’s website and do not maintain multiple digital properties, a more traditional Drupal CMS architecture probably makes more sense for you. It’s time to consider decoupled Drupal if you are a large organization with several uses for the same content, such as multiple sites displaying the same content or a stable of various front-end devices.

If you would like to further discuss if decoupled Drupal is right for your business, reach out to us here. And check out this whitepaper for a deeper dive into all of the Drupal 8 architecture options.

Jan 18 2018
Jan 18

In my last post, we discussed why marketers might want to migrate their content to Drupal 8, and the strategy and planning required to get started. The spreadsheet we shared with you in that post is the foundation of a good migration, and it usually takes a couple of sprints of research, discussion, and documentation to compile. It’s also a process that’s applicable to all content migrations, no matter the source or destination framework.

In this post, we will talk about what’s required from your internal teams to actually pull off a content migration to Drupal 8. In later posts, we’ll cover the actual technical details of making the migration happen.

Migration: A Definition

It’s probably worth taking some time here to clarify what, exactly, we’re talking about when we say ‘migration’. In this context, a migration is a (usually automated) transferring of existing content from an old web site to a new one. This also usually implies a systems upgrade, from an outdated version of your content management system to a current version.  In these exercises, we’re assuming that you’re moving from Drupal 6 or 7 to Drupal 8.

What kind of team is required?

There are several phases of migration, each of which requires a different skill set.  The first step is outlined in detail in my last post. The analysis done here is a joint effort, generally requiring input from a project manager and/or analyst, a marketing manager, and a developer.  

The project manager and analyst should be well versed in information architecture and content strategy (there is some great information on this topic at usability.gov). Further, it is really helpful if they have an understanding of the capabilities of the source and target systems, as this often informs what content is transferable, and how.

It’s also helpful if your team has a handle on the site’s traffic and usage. This usually falls to a senior content editor or marketing manager.  Also important is that they have the ability to decide what content is worth migrating, and in what form.

In the documentation phase of migration, the developer often has limited input, as this is the least-technical phase of the whole process. However, they should definitely have some level of oversight on the decisions being made, just to ensure technical feasibility.  That requires a good thorough understanding of the capabilities of the source and target systems.

One of the parties should also have the ability to make and export content types and fields. You can see Mike Potter’s excellent Guide to Configuration Management for more information on that.

Once development on the migration begins, it mostly becomes a developer task. Migrations are a really great mentoring opportunity (we’re really big on this at Phase2).

Finally, someone on the team also needs the ability to set up the source and target databases and files for use in all the environments (development, testing, production).

Estimation

“How long will all this take?”  We hear this a lot.  And, of course, there’s no one set answer. Migration is a complicated task with a lot of testing and a lot of patience required. It’s pretty difficult to pin down, but here are some (really, really rough) guidelines for you to start from. Many of the tasks below may sound unfamiliar; they will be covered in detail in later posts.

Node/User/Taxonomy migrations

                                       1-5 content types    6-10 content types    11+ content types
Initial analysis (“the spreadsheet”)   16-24 hours          32-40 hours           48-56 hours
Content type creation & export         16-40 hours          40-80 hours           8 hours/type
Configuration Grouping                 16-24 hours          24-40 hours           24-40 hours
Content migrations                     16-40 hours          32-56 hours           8 hours/type
Testing                                24-32 hours          40-56 hours           8 hours/type

Additional Migrations

Files & media migration                32-56 hours
Other entity types                     16-40 hours per entity type
Migrations from non-Drupal sources     16-40 hours per source type

The numbers here are in “averaged person-hours” format - this would be what it would take for a single experienced developer to accomplish these tasks. Again, remember that these are really rough numbers and your mileage will vary.

You might note, reading the numbers closely, that most of the tasks are ‘front-loaded’.  Migration is definitely a case where the heavy work happens at the start, to get things established.  Adding additional content types becomes simpler with time - fields are often reused, or at least similar enough to each other to allow for some overlap of code and configuration.

Finally, these numbers are also based on content types of "average" complexity. By this I mean, somewhere between 5 and 15 un-customized content fields.  Content types with substantially more fields, or with fields that require a lot of handling on the data, will expand the complexity of the migration.  More complexity means more time.  This is an area where it's hard to provide any specific numbers even as a guideline, but your migration planning spreadsheet will likely give you an idea of how much extra work is necessary.  Use your best judgement and don't be afraid to give yourself some wiggle room in the overall total to cover these special cases.

Security and Safety Considerations

As with all web development, a key consideration in migrating content is security. The good news is that migration is usually a one-time occurrence. Once it’s done, all the modules and custom code you’ve written are disabled, so they don’t typically present any security holes. As long as your development and database servers are set up to industry standard, migration doesn’t present any additional challenges in and of itself.

That said, it’s important to remember that you are likely to be working with extremely sensitive data - user data almost always contains PII (Personally Identifiable Information). It is therefore important to make sure that user data - in the form of database dumps, XML files, or other stores - does not get passed around in emails or other insecure channels.

Depending on your business, you may also have the same concerns with actual content, or with image and video files. Be sensible, take proper precautions.  And make sure that your git repository is not public.

I also strongly recommend sanitizing user accounts and email addresses on your development databases.  There’s no feeling quite like accidentally sending a few thousand dummy emails to your unsuspecting and confused customers.  Use drush sql-sanitize and avoid any possibly embarrassing and unprofessional gaffes.

What’s next?

Well, we’ve covered all the project management aspects of migration - next up is some tech talk!  Stay tuned for my next post, which will cover the foundations of developing a migration.

Jan 11 2018
Jan 11

With exponential growth in marketing tools and website builders, why are marketers still adopting Drupal and maintaining their existing Drupal systems? And how has Drupal evolved to become a crucial piece of leading brands’ martech ecosystems?

For marketing decision makers, there are many reasons to choose and stick with Drupal, including:  

  • Designed to integrate with other marketing tools

  • Increased administrative efficiencies

  • Flexible front-end design options

  • Reduced costs

Plays Well With Others

Your customer experience  no longer just depends on your CMS. Your CMS must integrate with new technologies and channels, as well as your CRM and marketing automation tools, to perform as a cohesive digital experience platform that reaches customers where they are.

Drupal is the most flexible CMS available when it comes to third-party integrations. Along with the power of APIs, Drupal can help you outfit your digital presence with the latest emerging tech more quickly than your competitors, allowing you to deliver an unparalleled customer experience.

Check out how Workday integrated disparate systems and tools, such as Salesforce and Mulesoft, to create a seamless experience that serves both their customer community members and internal support teams.

Increased Administrative Efficiencies

In large organizations, interdepartmental collaboration obstacles often translate into inefficient content publishing practices. This is compounded further when marketers and content editors need a developer in the loop to help them make changes. When these hurdles aren’t properly navigated, prospects and customers suffer by not being able to gain easy access to the most relevant and up-to-date product or service information.

Over the years, Drupal has evolved to be flexible and accommodating for non-technical content admins, providing a highly customizable and user-friendly administration dashboard and flexible user privileges. Drupal powers marketing teams to design content independent of developers with modules like Paragraphs, which lets content admins rearrange page layouts without code adjustments while enforcing consistency across company sites.

Flexible Front-End Design Options

Drupal 8 provides increased design flexibility by letting the front and back end architectures work as separate independent systems. Therefore the visual design of a website can be completely rebuilt without having to invest in any back-end architecture changes.

While this may seem a bit technical and in the weeds, this has significant benefits for marketing resources and budget! With this design flexibility, marketers can implement new designs faster and more frequently, empowering your team to test and iterate on UX to optimize the customer experience.

Reduced Costs

The number of marketing tools required to run a comprehensive omnichannel marketing strategy is only growing. We add tools to our martech stack to help us grow our reach, understand our customers better, and personalize customer engagement. Each one of these tools has its own associated package cost or service agreement.

As an open source platform, Drupal does not incur any licensing costs. In contrast, a large proprietary implementation can easily cost hundreds of thousands of dollars just for the right to use the software; Drupal’s community-developed software is free, saving companies millions.

Drupal is also fully customizable from the get-go - not only when it comes to features and what site visitors see, but also with regard to editor tools, workflows, user roles and permissions, and more. This means the money that would go towards customization projects is freed up to benefit customers.

Digital marketing managers considering Drupal, or those contemplating a migration to Drupal 8, should consider these benefits and how Drupal is helping digital marketers evolve to provide a more agile and user-friendly digital experience for prospects and customers.

Strongly considering a move to Drupal or a migration to Drupal 8? Reach out to us with any questions. And in the meantime check out Drupal 8 Content Migration: A Guide for Marketers.

Nov 20 2017
Nov 20

This year’s BADCamp DevOps summit featured a strong showing on the topic of containers. The program included many Docker-centric topics, and even the sessions that weren’t container-centric showed a lively interest in how new ideas, tools, and services relate to containers.

I strongly agreed with the keynote by Michelle Krejci arguing for containers as the next natural step in the commoditization of infrastructure. The Docker-driven Development panel in the afternoon featured maintainers of five different tools aimed at facilitating local development with Docker. Naturally we represented Outrigger.

Coming out of the panel we were excited to learn the many ways in which our core technical decisions align with other Docker tools in the Drupal ecosystem, as well as the many ways Outrigger’s particular developer experience and learning focus marks it as a little different.

Thanks to Alec for organizing and Rob for moderating.

Here is a recap of the Outrigger answers to various questions put to the panel.

How did your project get started? What need did it initially cover?

Outrigger got started in mid-2014 as a set of BASH scripts to facilitate setting up local, Docker-based environments for developers who didn’t know about Docker or containers, but expected their Drupal content to persist across work sessions and wanted nice, project-contextual URLs (instead of “localhost”).

We wanted a tool to allow our teams to easily jump between projects without running a bunch of heavy VMs or needing to juggle environment requirements.

It has since evolved into a Golang-based Docker management and containerized task-running tool, with a library of Docker images, a set of Dockerization conventions shipped for Drupal by a code generator, and of course a website, all spanning 20+ Github repositories.

How do you deal with the complexity of Docker?  Do you expose how containers are configured and operate or do you do something to ease the learning curve?

Outrigger brokers how Docker is managed on OSX, Linux, and Windows. We work really hard to minimize the time for a developer to onboard to a project, and to maximize the ease of running the most common operations any project might need, without regard for the technologies involved.

That gives us the breathing space to directly leverage fairly standard Docker configuration, especially configurations for docker-compose. This allows us to include that configuration as part of the project code repository. We want to make it easy for someone to look at and understand what is going on under the covers, so that they can learn more when they are ready.

Common operations, presented as “project scripts”, are configured in an outrigger.yml file at the project root and are easily inspected. They are chains of BASH commands, usually using docker or docker-compose to execute Drush, Drupal Console, BLT, Composer, Grunt, Gulp, npm, yarn, webpack, and all the other tools inside containers.
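
As a rough illustration of the idea - the script names here are invented and the schema is simplified, so check the Outrigger documentation for the exact format - a project script in outrigger.yml might look something like:

scripts:
  build:
    description: Install dependencies and rebuild the site.
    run:
      # Each step is a BASH command, usually delegating into a container.
      - docker-compose run --rm build composer install
      - docker-compose run --rm cli drush cache-rebuild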

Outrigger’s emphasis is on developer experience conventions and utilities to promote project team consistency first, with Docker hosting & management being a secondary concern.

Could you scale a local environment built on your project to a production use case? If so, how?

Outrigger is not just a tool; it’s also a set of conventions and a library of Docker images. We support local development and continuous integration primarily, but a project based on Outrigger could be leveraged in production simply by publishing an application-specific image.

We’ve used Docker in production this way, and also in hybrid approaches such as using the Docker infrastructure as part of a release system shipping to otherwise traditional servers.

Our current research is into how to more naturally support Kubernetes for all non-local environments.

How are you solving the slow filesystem issue (Docker 4 Mac specific)? Do you see your approach changing in the future?

We use docker-machine instead of Docker4Mac primarily because Docker4Mac performance has traditionally been very poor and their networking and routing support is similarly bad.

We initially took the NFS route with docker-machine for shared files and still found that didn’t meet reasonable performance targets for our typical builds. NFS can be really bad when you have lots of really small files. In some cases, we had builds take 20 minutes instead of 4 minutes.

We’ve since switched to a Unison-based approach to get the best of both worlds in terms of local IDE performance and container performance. In our measuring, it’s as fast as the virtual machine can be and we’ve seen close-enough to native performance that it’s a non-issue for now. Our Unison-based approach also has the benefit of supporting filesystem event notifications, making watches that run inside containers a reality.  It even has a similar level of support overhead to NFS in terms of helping less ops-centric developers continue to work smoothly.  We still use the ease of NFS for operations that don’t require high performance or in-container filesystem event notification.

If Docker4Mac addressed all our performance and development experience concerns, we would probably switch to extending that as a common core product. However, beyond file system performance, it doesn’t seem like they have solved the other network routing and DNS issues that Outrigger is focused on solving.

Are there any other platform-specific pain points you’ve seen?

Finding someone willing to test on Windows and help us find Windows-savvy solutions to DNS routing has been a challenge. We’re mostly a macOS and Linux shop.

How would you handle integration with arbitrary third party tools that are not built into your project yet? E.g., Someone wants to use Elastic Search or some crazy Node frontend. How would you wire that in?

We support anything that can run in a Docker container. This can be entirely driven from an individual application as follows:

  1. Find or build a Docker image, preferably with s6 for init scripts and signal handling, and confd support for commonly configured options.

  2. Wire that into the project’s docker-compose configuration with (see the sketch after this list):

    • Volumes (or bind mounts that assume a /data directory) to persist data

    • Environment config file overrides added via custom Dockerfile or bind mounts in the project’s ./env directory.

    • Labels for operating services that should be web accessible so DNSDock can provide DNS resolution for friendly URLs
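
As a sketch of what step 2 can look like in practice - the service and project names are invented, and the DNSDock label names should be verified against its documentation - a docker-compose entry might resemble:

version: '3'
services:
  search:
    image: elasticsearch:5.6
    volumes:
      # Persist index data, following the /data bind mount convention.
      - /data/myproject/elasticsearch:/usr/share/elasticsearch/data
    labels:
      # DNSDock can read labels like these to publish a friendly
      # hostname such as search.myproject.vm.
      com.dnsdock.name: search
      com.dnsdock.image: myproject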

Outrigger is very open on matters of image structure; most of the details are usage conventions or opt-in functionality.

What is your project’s current relationship with Drupal?  Would you say you’re “Married”, “It’s complicated”, or “Just friends”?

Our most commonly used Docker images are fine-tuned to support Drupal or tools common to the Drupal ecosystem. Our most sophisticated project conventions are worked out with Drupal, in the form of our Yeoman-based Outrigger Drupal project generator.

We deliberately wanted Outrigger to be flexible enough to facilitate good developer experiences regardless of technology, so I would say “Just Friends”.

What’s the biggest missing piece (or potential opportunity) for local development stacks these days?

Continuity with production is the holy grail of local development, and it’s very close.

Another gap is supporting execution of tasks in the local environment that may not always need to be in containers - or, if they are in containers, tasks complex enough that asking developers to memorize docker-compose commands is still complicated. To that end we’ve created a task runner in the Outrigger CLI, meant as a companion to docker-compose and any tools run inside the containers.

If a genie came out of a bottle and granted you one wish for the next Docker release OTHER THAN a faster filesystem on OSX, what would you wish for?

I think the greatest thing Docker4Mac could do is expose network routing to the underlying docker bridge network. Allowing for direct routing to containers within the macOS hypervisor would remove the need for all containers to share the localhost port space. This would facilitate launching, for example, multiple database containers and connecting to each of them on the same port but at different domain names. The networking limitations of Docker4Mac are a big obstacle to an enhanced developer experience and power-user capabilities.

Docker natively supporting Kubernetes instead of just Docker Swarm (It’s happening!)

Click here to learn more about why you should add Outrigger to your development toolkit, and be on the lookout for our upcoming blog on Outrigger version 2.0.0.

Nov 09 2017
Nov 09

A growing number of healthcare organizations have chosen to build their digital presence with Drupal, including Memorial Sloan Kettering and New York State’s largest healthcare provider, Northwell Health. And healthcare continues to adopt Drupal as the content management system of choice, with many healthcare systems embracing the benefits of migrating from older Drupal platforms to Drupal 8.

What is it about Drupal that keeps leading healthcare institutions committed to the platform? And how has Drupal evolved to help healthcare organizations better serve their patients and create a secure, user-friendly digital experience?

For digital healthcare decision makers, there are many reasons to choose and stick with Drupal, including:  

  • Best-in-class security

  • Centralized multi-site management

  • Built for third-party integrations

  • Increased administrative efficiencies and consistent UX

  • Improved accessibility

Let’s look at how these Drupal capabilities are helping digital healthcare evolve today.

Best-In-Class Security

While healthcare organizations may have balked at using open source solutions initially due to security and patient privacy concerns, adoption of Drupal by leading medical facilities like the Children’s Hospital of Philadelphia and Duke Health has extinguished myths around open source security.

Drupal’s collaborative, open source development model gives it an edge when it comes to security. Throngs of Drupal developers around the globe ensure a constant process of testing, reviews, and alerts that enables the detection and eradication of potential security vulnerabilities.

Since thousands of developers dedicate their time and talents to finding and fixing security issues, Drupal can respond very quickly when problems are found. With Drupal 8, there are even more ways the Drupal community has taken action to make this software secure and evolve to respond to new types of attacks.

Centralized Multi-Site Management

Health systems often encompass many healthcare brands, each of which requires its own digital presence, content, and site functionality. Creating and managing a centralized, consistent experience for patients across health providers and devices can be tricky.

A multisite platform built with Drupal enables healthcare systems to run all of their sites off of a single codebase, providing better consistency, streamlined maintenance, and facilitating easier content sharing between sites, while empowering healthcare facilities with flexible functionality for their specific needs. Editors from one centralized office can easily publish and push content to multiple sites.

Editors can also quickly create new microsites without seeking developer assistance. This gives them greater agility in posting timely, relevant content for their patients across many different digital spaces.

Built for Third-party Integrations

Healthcare tools, tech, and software are experiencing explosive growth as new patient communication channels like chatbots, voice technology, and AI emerge. The digital patient experience no longer just lives on your CMS.

Your CMS must integrate with new technologies and channels, as well as your CRM and marketing automation tools, to perform as a cohesive digital experience platform that reaches patients where they are.

Drupal is the most flexible CMS available when it comes to third-party integrations. Along with the power of APIs, Drupal can help you outfit your digital presence with the latest emerging tech more quickly than your competitors, allowing you to deliver an unparalleled patient experience.

Increased Administrative Efficiencies and Consistent User Experience

In large, consolidated medical ecosystems, interoffice and interdepartmental collaboration obstacles often translate into inefficient content publishing practices. This is compounded further when content editors need a developer in the loop to help them make changes. When these hurdles aren’t properly navigated, patients suffer by not being able to gain easy access to the most relevant and up-to-date information.

Over the years, Drupal has evolved to be more flexible and accommodating for non-technical content admins, providing a highly customizable and user-friendly administration dashboard. Drupal empowers healthcare content admins to design content independent of developers with modules like Paragraphs, which lets content admins rearrange page layouts without code adjustments while enforcing consistency across agency sites.

Check out how Phase2 helped Memorial Sloan Kettering create a consistent user experience for over 130,000 patients with Drupal 8.

Improved Web Accessibility

To effectively serve their patients, healthcare websites must be accessible to an extremely large and diverse audience. This audience often requires accommodations for physical disabilities, and needs to be able to access information across an array of devices, and in multiple languages. With its diverse, worldwide community of contributors, Drupal facilitates meeting accessibility needs on a number of fronts.

Flexible and fully customizable theming makes it possible for Drupal sites to meet Section 508 and WCAG accessibility requirements. Responsive base themes are readily available to give themers a strong foundation for ensuring compatibility with a wide range of access devices. And internationalization is a cornerstone of Drupal 8, providing multilingual functionality.

These accessibility features are helping healthcare systems create a user-friendly experience for everyone, and ultimately pushing digital healthcare to follow user-centric design best practices.

Healthcare professionals considering Drupal, or a migration to Drupal 8, should consider these benefits and how leveraging Drupal for healthcare systems can evolve digital experiences to be seamless, intuitive, and accessible for both the patient and internal marketing and IT teams.

To learn more about evolving the digital patient experience with Drupal, listen to this podcast with Phase2 and our client, Northwell Health.

Nov 07 2017
Nov 07

If you’re a marketer considering a move from Drupal 7 to Drupal 8, it’s important to understand the implications of content migration. You’ve worked hard to create a stable of content that speaks to your audience and achieves business goals, and it’s crucial that the migration of all this content does not disrupt your site’s user experience or alienate your visitors.  

Content migrations are, in all honesty, fickle, challenging, and labor-intensive. The code that’s produced for a migration is used once and discarded; the documentation that supports it is generally never seen again after the work is done. So what’s the value in doing it at all?

Your data is important (Especially for SEO!) 

No matter what platform you’re working to migrate, your data is important. You’ve invested lots of time, money, and effort into producing content that speaks to your organization’s business needs.

Migrating your content smoothly and efficiently is crucial for your site’s SEO ranking. If you fail to migrate highly trafficked content, or to ensure that existing links direct readers to your content’s new home, you will see visitor numbers plummet. Once you fall behind in SEO, it’s difficult to climb back up to a top spot, so taking content migration seriously from the get-go is vital for your business’ visibility.

Also, if you work in healthcare or government, some or all of your content may be legally mandated to be both publicly available and letter-for-letter accurate. You may also have to go through lengthy (read: expensive) legal reviews for every word of content on your sites to ensure compliance with an assortment of legal standards – HIPAA, Section 508 and WCAG accessibility, copyright and patent review, and more.

Some industries also mandate access to content and services for people with Limited English Proficiency, which usually involves an additional level of editorial content review (See https://www.lep.gov/ for resources).  

At media organizations, it’s pretty simple – their content is their business!

In short, your content is a business investment – one that should be leveraged.

So Where do I start with a Drupal 8 migration?

Like with anything, you start at the beginning. In this case that’s choosing the right digital technology partner to help you with your migration. Here’s a handy guide to help you choose the right vendor and start your relationship off on the right foot.

Once you choose your digital partner, content migration should start at the very beginning of the engagement. Content migration is one of the building blocks of a good platform transition. It’s not something that can be left for later – trust us on this one. It’s complicated, takes a lot of developer hours, and typically affects both your content strategy and your design.

Done properly, the planning stages begin in the discovery phase of the project with your technology vendor, and work on migration usually continues well into the development phase, with an additional last-sprint push to get all the latest content moved over.

While there are lots of factors to consider, they boil down to two questions: What content are we migrating, and how are we doing it?

Which Content to Migrate

You may want to transition all of your content, but this is an area that does bear some discussion. We usually recommend a thorough content audit before embarking on any migration adventure. You can learn more about website content audits here. Since most migration happens at a code & database level, it’s possible to filter by virtually any facet of the content you like. The most common in our experience are date of creation, type of content, and categorization.

While it might be tempting to cut off your site’s content to the most recent few articles, Chris Anderson’s 2004 Wired article, “The Long Tail” (https://www.wired.com/2004/10/tail/) observes that a number of business models make good use of old, infrequently used content. The value of the Long Tail to your business is most certainly something that’s worth considering.

Obviously, the type of content to be migrated is pretty important as well. Most content management systems differentiate between different ‘content types’, each with their own uses and value. A good, thorough analysis of the content model, and the uses to which each of these types has been and will be put, is invaluable here. There are actually two reasons for that. First, the analysis can be used to determine what content will be migrated, and how. Later, this analysis serves as the basis for creating those ‘content types’ in the destination site.

A typical analysis takes place in a spreadsheet (yay, spreadsheets!). Our planning sheet has multiple tabs but the critical one in the early stages is Content Types.


content types planning sheet

Here you see some key fields: Count, Migration, and Field Mapping Status.

Count is the number of items of each content type. This is often used to determine if it’s more trouble than it’s worth to do an automated content migration, as opposed to a simple cut & paste job. As a very general guideline, if there are more than 50 items of content in a content type, then that content should probably be migrated with automation. Of course, the number of fields in a content type can sway that as well. Once this determination is made, that info is stored in the Migration field.

The Field Mapping Status column is for the use of developers, and reflects the current effort to create the new content types, with all their fields. It’s a summary of the Content Type Specific tabs in the spreadsheet. More detail on this is below.

Ultimately, the question of what content to migrate is a business question that should be answered in close consultation with your stakeholders.  Like all such conversations, this will be most productive if your decisions are made based on hard data.

How do we do it?

This is, of course, an enormous question. Once you’ve decided what content you are going to migrate, you begin by taking stock of the content types you are dealing with. That’s where the next tabs in the spreadsheet come in.

The first one you should tackle is the Global Field Mappings. Most content management systems define a set of default fields that are attached to all content types. In Drupal, for example, this includes title, created, updated, status, and body. Rather than waste effort documenting these on every content type, document them once and, through the magic of spreadsheet functions, print them out on the Content Type tabs.


global field mappings

Generally, you want to note Name, Machine Name, Field Type, and any additional Requirements or Notes on implementation on these spreadsheets.

It’s worth noting here that there are decisions to be made about what fields to migrate, just as you made decisions about what content types.  Some data will simply be irrelevant or redundant in the new system, and may safely be ignored.


migration planning sheet

In addition to content types, you also want to document any supporting data – most likely users and any categorization or taxonomy. For a smooth migration, you usually want to start development with them.

The last step we’ll cover in this post is content type creation. Having analyzed the structure of the data in the old system, it’s time to begin to recreate that structure in the new platform. For Drupal, this means creating new content type bundles, and making choices about the field types. New platforms, or new versions of platforms, often bring changes to field types, and some content will have to be adapted into new containers along the way.  We’ll cover all that in a later post.
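
As a point of reference, a Drupal 8 content type ultimately lives in exported configuration. A trimmed, illustrative export of a hypothetical Article type might look like the sketch below (fields live in their own config files):

# node.type.article.yml (trimmed for illustration)
langcode: en
status: true
dependencies: {  }
name: Article
type: article
description: 'Time-sensitive content such as news items and blog posts.'
new_revision: true
display_submitted: true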

Now, many systems have the ability to migrate content types, in addition to content. Personally, I recommend against using this capability. Unless your content model is extremely simple, the changes to a content type’s fields are usually pretty significant. You’re better off putting in some labor up front than trying to clean up a computer’s mess later.

In our next post, we’ll address the foundations of Drupal content migrations – Migration Groups, and Taxonomy and User Migrations. Stay tuned!

Oct 30 2017
Oct 30

When it comes to digital interactions, today’s users have become quite demanding. Whether they’re using a touchscreen or desktop, a native app or social media platform, they expect a continuous, unified experience, made up of seamless interactions - one that syncs with their offline journey as well. We call these places of interaction touchpoints, and customers reach them via channels. In the past, brick-and-mortar stores used a single channel - a physical location - as a touchpoint to interface with customers.

Today, of course, brands are accessible via multiple channels, including websites, social media, and more. This approach, while effective for reaching wider audiences, opens the door for inconsistencies across touchpoints.


Touchpoints can easily become gaps. For instance, if product information is readily available on a store’s website but missing from its mobile app, users may become frustrated by the disparity. Users have come to expect a certain degree of contextual awareness, as well - why doesn’t the app know that I’m in the store and adjust accordingly? Why is there a gap in my experience? Unfortunately, gaps in the digital experience quickly lead to dissatisfaction with the brand as a whole and can cause consumer flight. What’s a business to do?

Enter Omni-Channel

Omni-channel is just a fancy way to describe a unified experience across multiple touchpoints, consistent on multiple devices. Martha Stewart is one of the most prominent examples of how to do omni-channel well. Whether you are shopping on marthastewart.com, visiting her store, reading her cookbooks, browsing her blog, or receiving her email newsletter, both the brand experience and the content are congruous and responsive to your device.


JustFab is another brand that does this well, allowing customers to make purchases within external apps like Instagram. Even though the transaction is happening on a completely different platform, the user experience is so coherent that the consumer doesn’t care that they are not actually making the purchase on the company’s website.

Technological advances - like the advent of Drupal 8 - make it easier for organizations to support content everywhere and anywhere. But truly implementing omni-channel requires a thoughtful digital strategy. You must consider the user journey, what digital touchpoints effortlessly enable this journey, and how these touchpoints support your own content strategy.

Omni-Channel Content Strategy: A Quick Tutorial

First, consider your own goals. What are you trying to achieve? Whether your objective is to sell a product, promote an event, or attract brand ambassadors, begin by identifying how your current content strategy supports - or detracts from - these goals.

Next, consider your audience. In interacting with you, what are their objectives? What is your unique value proposition to them, and how can you ensure they are aware of it? What do they gain from your organization in general, and each touchpoint in particular? Does the interaction engage them and pique their interest… or irritate them?

Then comes the user journey, aka the series of steps a potential customer moves through as he/she encounters your brand, explores your content, and hopefully makes a purchase (or whatever your ultimate objective is). Map out the ideal user journey for an ideal hypothetical customer. Which steps happen online? Which steps happen offline? Does the entire journey happen in one digital location, like your website, or do users bounce between various platforms? How can you build in contextual awareness to continually delight your users by thinking of their needs ahead of time? It is crucial to make the transitions between these touchpoints seamless - after all, continuity of experience for your users is what makes the difference between multi-channel and omni-channel.

How Does Drupal 8 Support Omni-Channel?

Now that you have a better grasp on your omni-channel content strategy, it’s time to start making technology decisions. Drupal has always been the go-to CMS for publishing content across multiple channels, and Drupal 8 continues in this tradition with several new features baked into core.

Web Services: Integration with APIs

Drupal 8 was designed to publish and consume content through APIs as a core feature. As a result, organizations can use Drupal as a central content repository, leveraging Drupal’s rich content structuring, creation, and management features. Drupal 8 essentially acts as a centralized hub, serving content to a variety of channels.


Personalization, Translation, and Localization

While each individual should have a consistent experience, your digital experience in general can - and indeed, should - vary from person to person. Tailor content for each and every user, depending on his/her preferences, history of interaction with the brand, location, and (obviously) language. Drupal taxonomy terms can increase engagement on your website by targeting content to users, and more advanced segmentation can be achieved with personalization modules such as the Web Engagement Management module.

In addition, multilingual capabilities are included in Drupal 8 core, making it easy to tailor users’ experience based on their location. Several improvements in core make this possible:

  1. Native installation is available in 94 languages
  2. You can now assign a language to everything
  3. Automated downloads and updates of language and user interface translations are included
  4. Local translations are protected
  5. Field-level translation applies to all content and integrates with Views
  6. The built-in translation interface works for all configurations

Responsiveness Out-of-the-Box

This one is fairly obvious - omni-channel content should be accessible from any device. Drupal 8 was designed using a mobile-first strategy; all of the built-in starter themes are responsive. Its responsive design targets the viewport size and either scales the content and layout for the available real estate or switches to a new or modified layout at defined breakpoints. Drupal core comes with two modules that enable responsive behavior: Breakpoint and Responsive Image. This means the ability to display your content appropriately on a variety of devices isn’t something you have to bolt on after the fact; it’s a core part of the Drupal framework.

Summing It Up

We all interact with content in a variety of formats and mediums. Successful organizations have a strategy in place to take advantage of those channels and touchpoints best suited to their business goals, customers’ preferences, and technology capabilities. Drupal 8 is the latest and greatest means to this end!

Want to learn more about planning your digital roadmap to be omni-channel ready? Read more!

Oct 12 2017
Oct 12

Are you a digital marketer considering Drupal 8? This blog is for you.

As marketers we have a front row seat to the rapid evolution of the digital landscape and changing customer expectations. We’re living in a post-browser world, and we’re swiftly moving into the age of full-blown digital experience ecosystems.

Customer expectations include a seamless experience and interaction with a wide range of touchpoints including mobile, wearables, IoT, and chatbots. An investment in a flexible system like Drupal 8 will help you deliver a customer experience that sets you apart from your competitors.

With the swift digital evolution and changing customer expectations come some compelling reasons for marketers to champion the investment in Drupal 8.

A few examples include an API-friendly design that lets you integrate with your existing SaaS tools, and increased content publishing efficiencies that allow marketers to quickly update and organize content across a growing number of touchpoints without developer assistance.

Watch this webinar to learn more about the marketing benefits of migrating to Drupal 8, and how to navigate the Drupal 8 decision.

You’ll learn:

  • Why organizations are moving to Drupal 8

  • How Drupal 8 can support your marketing and customer engagement strategies

  • When it's the right time and circumstance to make the move

  • What to consider for a successful migration plan

Watch the webinar here.

Oct 04 2017
Oct 04

So often in the enterprise software market, we see that one of the most pressing challenges companies face is scaling their online community support operations to match their explosive user growth.

A common factor in scaling success is always an emphasis on optimizing the user experience, design, and underlying technical architecture so that the entire digital support ecosystem is intuitive, fast, and consistent across touchpoints. And we are not talking about your traditional intranet portal. We’re talking about a digital support experience that is accessed from any device and provides a seamless brand experience across online and offline touchpoints.

A perfect example of this kind of community support success can be found in the updated platform of our client, Workday. The Workday customer and partner community has grown to over 70,000 members in the past two years - a 60% increase. In light of their rapid growth and evolving business needs, Workday worked with Phase2 to create an engaging community platform that supports and educates their customer community.

The updated platform, built on Drupal 8, features topic forums, product release information, quick discovery of valuable resources, direct customer support access, and a custom “Brainstorm” feature that empowers the customer community to influence Workday’s product roadmap.

Workday is an example of an ambitious organization leveraging open source to build powerful solutions: no licensing fees, the power of the Drupal development community, and secure yet flexible core software that will scale with its business needs.

Learn more about the Workday project here.

Sep 07 2017
Sep 07

In 2012 we wrote a blog post about why many of the biggest government websites were turning to Drupal. The fact is, an overwhelming number of government organizations, from state and local branches to federal agencies, have chosen to build their digital presence with Drupal, and government continues to adopt Drupal as the content management system of choice. What is it about Drupal that keeps government committed to the platform? And how has Drupal evolved to help government agencies better serve their constituents?

For digital government decision makers, there are many reasons to choose and stick with Drupal, including:

  • Increased content publishing efficiencies

  • Flexible and consistent UX

  • Centralized management of many sites

  • Improved accessibility

  • No licensing fees, lower operational maintenance costs

  • Best-in-class security

Let’s look at how these Drupal capabilities are helping government agencies evolve today.

Increased Administrative Efficiencies and Consistent User Experience

In large organizations, interdepartmental collaboration obstacles often translate into inefficient content publishing practices. This is further compounded when content editors need a developer in the loop to help them make changes. When these hurdles aren’t properly navigated, citizens suffer by not being able to gain easy access to the most relevant and up-to-date information.

Over the years, Drupal has evolved to be more flexible and accommodating for non-technical content admins, providing a highly customizable and user-friendly administration dashboard. Drupal empowers government content admins to design content independent of developers with modules like Paragraphs, which lets content admins rearrange page layouts without code adjustments while enforcing consistency across agency sites.

Check out how the Department of Energy’s user-centric site design leads the charge in government and competes with the private sector’s digital experience.

Improved Accessibility

To effectively serve the public, government websites must be accessible to an extremely large and diverse audience. At times, this audience may require accommodations for physical disabilities, an array of devices, and multiple languages. With its diverse, worldwide community of contributors, Drupal facilitates meeting accessibility needs on a number of fronts.

Flexible and fully customizable theming makes it possible for Drupal sites to meet Section 508 and WCAG accessibility requirements. Responsive base themes are readily available to give themers a strong foundation for ensuring compatibility with a wide range of access devices. And internationalization is a cornerstone of Drupal 8, providing multilingual functionality.

These accessibility features are helping government agencies create a user friendly experience for everyone, and ultimately pushing digital government to follow user-centric design best practices.

Centralized Management of Many Sites

Government agencies comprise many offices, each of which requires its own digital presence, content, and architecture. Creating and managing a centralized, consistent experience for constituents across offices and devices can be tricky.

Drupal allows government to develop a platform that runs all sites off of a single codebase, providing better consistency, streamlining maintenance, and facilitating easier content sharing between sites. Editors from one centralized government office can easily publish and push content to multiple sites.

Editors can also quickly create new microsites without seeking developer assistance. This gives them greater agility in posting timely, relevant content for their visitors across many different digital spaces.

Check out how Phase2 helped the State of Georgia move 55 websites from a proprietary system hosted at the state’s data center to a Drupal platform hosted in the cloud.

Reduced Costs

In order to truly evolve, government agencies need to allocate funds to the projects and teams that benefit their constituents, not to hosting services and site customization.

As an open source platform, Drupal does not incur any licensing costs. While a large implementation can easily cost hundreds of thousands of dollars just for the right to use proprietary software, Drupal’s community-developed software is free, saving government millions.

Drupal is also fully customizable from the get-go: not only when it comes to features and what site visitors see, but also with regard to editor tools, workflows, user roles and permissions, and more. This means the money that would go towards customization projects is freed up for more appropriate use.

Drupal’s cost saving features enabled the State of Georgia to reduce platform operational costs by 65%.

Best-In-Class Security

While government agencies have historically been wary of using open source software, the adoption of Drupal by leading federal agencies like the White House, Department of Energy, and U.S. Patent and Trademark Office has extinguished most of the security myths around open source software.

Drupal’s collaborative, open source development model gives it an edge when it comes to security. Throngs of Drupal developers around the globe maintain a constant process of testing, review, and alerting that ensures potential security vulnerabilities are detected and eradicated. Since thousands of developers dedicate their time and talents to finding and fixing security issues, Drupal can respond very quickly when problems are found. With Drupal 8, the Drupal community has taken even more steps to make the software secure and to respond to new types of attacks.

Government managers considering Drupal, or government users contemplating a migration to Drupal 8, should consider these benefits and how Drupal is helping digital government evolve to be a more efficient, user-friendly, and accessible environment for constituents. For more information on how to use Drupal to increase efficiency and lower costs in government agencies, take a look at the work Phase2 has done with leading government agencies.

Jul 26 2017
Jul 26

Introduction

One of the greatest improvements added in Drupal 8 was the Configuration Management (CM) system. Deploying a site from one environment to another involves somehow merging the user-generated content on the Production site with the developer-generated configuration from the Dev site. In the past, configuration was exported to code using the Features module, of which I am a primary maintainer.

Using the D8 Configuration Management system, configuration can now be exported to YAML data files using Drupal core functionality. This is even better than Features because a) YAML is a proper data format instead of the PHP code that was generated by Features, and b) D8 exports *all* of the configuration, ensuring you didn’t somehow miss something in your Features export.

“Drupal 8 sites still using Features for configuration deployment
need to switch to the simpler core workflow.”

Complex sites using Features for environment-specific or multi-site configurations should investigate the Config Split module. Sites using Features to bundle reusable functionality should consider whether their solutions are truly reusable and investigate new options such as Config Actions.

Features in Drupal 8

When we ported Features to Drupal 8, we wanted to leverage the new D8 CM system, and return Features to its original use-case of packaging configuration into modules for reusable functionality. New functionality was added to Features in D8 to help suggest which configuration should be exported together based on dependencies. The idea was to stop using Features for configuration deployment and instead just use it to organize and package your configuration.

We’ve found that despite the new core configuration management system designed specifically for deployment, people are still using Features to deploy configuration. It’s time to stop, and with a few exceptions, maybe it’s time to stop using Features altogether.

Problems using Features

Here is a list of some of the problems you might run into when using Features to manage your configuration in D8:

  1. Features suggests configuration to be exported with your Content Type, but after exporting and then trying to enable your new module, you get “Unmet dependency” errors.

  2. You make changes to config, re-export your feature module, and then properly create an update-hook to “revert” the feature on your other sites, only to find you still get errors during the update process.

  3. You properly split your Field Storage config from your Field Instance so you can have multiple content types that share the same field storage, but when you revert your feature it complains that the field storage doesn’t exist yet. This is because you didn’t realize you needed to revert the field storage config *first*.

  4. You try to refactor your config into more modular pieces, but still run into what seems like circular dependency errors because you didn’t realize Features didn’t remove the old dependencies from your module’s .info.yml file (nor should it).

  5. You decide to try the core CM process using the Drush config-export and config-import commands, but after reverting your features, config-export reports a lot of UUID changes. You don’t even know which UUIDs it’s talking about or which changes you should accept.

  6. You update part of your configuration and re-export your module. When you revert your feature on your QA server, you discover that it also overwrote some other config changes that were made via the UI that somebody forgot to add to another feature.

  7. The list goes on.

Why Features is still being used

Given all of the frustrating complications with Features in D8, why do some still use it? After all, up until a few months ago it was the default workflow even in tools such as Acquia BLT.

Most people who still use Features typically fall into two categories:

  1. “My old D7 workflow using Features still seems to mostly work, I’m used to it and just deal with the new problems, and I don’t have resources to update my build tools/process.”

  2. “I am building a complex platform/multi-site that needs different configuration for different sites or environments and having Features makes it all possible. I don’t have to worry about non-matching Site UUIDs.”

People in the first category just need to learn the new, simpler, core workflow and the proper way to manage configuration in Drupal 8. It’s not hard to learn and will save you much grief over the life of your project. It is well worth the time and resource investment.

Until recently, people in the second category had valid concerns because the core CM system does not handle multiple environments, profiles, distributions, or multi-site very well. Fortunately there are now some better solutions to those problems.

Handling environment-specific config

Rather than trying to enable different Features modules on different environments, use the relatively new Config Split module. Config Split allows you to create multiple config “sync” directories instead of just dumping all of your config into a single location. During the normal config-import process it will merge the config from these different locations based on your settings.

For example, you split your common configuration into your main “sync” directory, your production-specific config into a “prod” directory, and your local dev-specific config into a “dev” directory. In your settings.php you tell Drupal which environment to use (typically based on environment variables that tell you which site you are on).
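
In settings.php, activating the right split typically looks something like the sketch below; the split machine names (“dev”, “prod”) and the environment variable are assumptions you would adapt to your own hosting setup:

<?php
// settings.php: enable exactly one Config Split per environment.
$config['config_split.config_split.dev']['status'] = FALSE;
$config['config_split.config_split.prod']['status'] = FALSE;

// A hypothetical environment variable set by your hosting platform.
if (getenv('SITE_ENVIRONMENT') === 'prod') {
  $config['config_split.config_split.prod']['status'] = TRUE;
}
else {
  $config['config_split.config_split.dev']['status'] = TRUE;
}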

When you use config-import within your production environment, it will merge the “prod” directory with your default config/sync folder and then import the result. When you use config-import within your local dev environment, it will merge the “dev” directory with your default config and import it. Thus, each site environment gets its correct config. When you use config-export to re-export your config, only the common config in your main config/sync folder is exported; it won’t contain the environment-specific config from your “dev” environment.

Think of this like putting all your “dev” Features into one directory, and your “prod” Features into another directory. In fact, you can even tell Config Split which modules to enable on different environments and it will handle the complexity of the core.extension config that normally determines which modules are enabled.

Acquia recently updated their build tools (BLT) to support Config Split by default and no longer needs to play its own games with deciding which modules to enable on which sites. Hopefully someday we’ll see functionality like Config Split added to the core CM system.

Installing config via Profiles

A common use-case for Features is providing common packaged functionality to a profile or multi-site platform/distribution. Features strips the unique identifiers (UUIDs) associated with the config items exported to a custom module, allowing you to install the same configuration across different sites. If you just use config-export to store your site configuration into your git repository for your profile, you won’t be able to use config-import to load that configuration into a different site because the UUIDs won’t match. Thus, exporting profile-specific configuration into Features was a common way to handle this.

Drupal 8 core still lacks a great way to install a new site using pre-existing configuration from a different site, but several solutions are available:

Core Patches

Several core issues address the need to install Drupal from pre-existing config, but for the specific case of importing configuration from a *profile*, the patch in issue #2788777 is currently the most promising. This core patch will automatically detect a config/sync folder within your profile directory, import that config when the profile is installed, and properly set the Site UUID and Config UUIDs so the site matches what was originally exported. Essentially you have a true clone of the original site. If you don’t want to move your config/sync folder into the profile, you can also just specify its location using the “config_install” key in the profile.info.yml file.

This patch isn’t ideal for public distributions (such as Lightning) because it would make the UUIDs of the site and config the same across every site that uses the distribution. But for project-specific profiles it works well to ensure all your devs are working on the same site ID regardless of environment.

Using Drush

Another alternative is to use a recent version of “drush site-install” with its new “--config-dir=config/sync” option. This command will install your profile, patch the site UUID, and then perform a config-import from the specified folder. However, this still has a problem when using a profile that creates its own config, since the config UUIDs created during the install process won’t match those in the config/sync folder. This can lead to obscure problems you might not initially detect, such as Drupal reporting entity schema changes only after cron is run.

Config Installer Project

The Config Installer project was a good initial attempt and helped make people aware of the problem and use-case. It adds a step to the normal D8 install process that allows you to upload an archived config/sync export from another site, or specify the location of the config/sync folder to import config from. This works for simple sites, but because it is a profile itself, it often has trouble installing more complex profile-based sites, such as sites created from the Lightning distribution.

Reusable config, the original Features use case

When building a distribution or complex platform profile, you often want to modularize the functionality of your distribution and allow users to decide which pieces they want to use. Thus, you want to store different bits of configuration with the modules that actually provide the different functionality. For example, placing the “blog” content type, fields, and other config within the “blog” module in the distro so it can be reused across multiple site instances. This was often accomplished by creating a “Blog Feature” and using Features to export all related “blog” configuration to your custom module.

Isn’t that what Features was designed for? To package reusable functionality? The reality is that while this was the intention, Feature modules are inherently *not* reusable. When you export the “blog” configuration to your module, all of the machine names of your fields and content types get exported. If you properly namespaced your machine names with your project prefix, your project prefix is now part of your feature.

When another project tries to reuse your “Blog Feature”, they either need to leave your project-specific machine names alone, or manually edit all the files to change them. This limits the ability to properly reuse the functionality and incrementally improve it across multiple projects.

Creating reusable functionality on real-world complex sites is a very hard problem and propagating updates without breaking or losing improvements that have been made makes it even harder. Sometimes you’ll need cross-dependencies, such as a “related-content” field that is used on both Articles and Blogs and needs to reference other Article and Blog nodes. This can seem like a circular dependency (it’s not) and requires you to split your Features into smaller components. It also makes it much more difficult to modularize into a reusable solution. How is your “related-content” functionality supposed to know what content types are on your specific site that it might need to reference?

Configuration Templates

We have recently created the Config Actions and Config Templates modules to help address this need. They allow you to replace the machine names in your config files with variables and store the result as a “template”. You can then use an “action” to reference that template, supply values for the variables, and import the resulting config.

In a way, this is similar to how reusable functionality is achieved in a theme using SASS instead of CSS. Instead of hardcoding your project-specific names into the CSS, you create a set of SASS files that use variables. You then create a file that provides all the project-specific variable values and then “include” the reusable SASS components. Finally, you “compile” the SASS into the actual CSS the site needs to run.

Config Actions takes your templates and actions and “runs” them by importing the resulting config into your D8 site, which you then manage using the normal Configuration Management process. This allows you to split your configuration into reusable templates/actions and the site-specific variable values needed for your project. Config Templates actually uses the Features UI to help you export your configuration as templates and actions to make it more usable.

Stay tuned for my next blog where I will go into more detail about how to use Config Actions and Config Templates to build reusable solutions and other configuration management tricks.

Conclusion

While the Drupal 8 Configuration Management system is a great step forward, it is still a bit rough when dealing with complex real-world sites. Even though I have blogged in the past about “best practices” using a combination of Features and core CM, recent tools such as Config Split, installing config with profiles, and Config Templates and Actions all help better solve these problems. The Features module is really no longer needed and shouldn’t be used to deploy configuration. However, Features still provides a powerful UI and plugin system for managing configuration and in combination with new modules such as Config Actions it might finally achieve its dream of packaging reusable functionality.

To learn more about Advanced Configuration Management, come to my upcoming session at GovCon 2017 or DrupalCamp Costa Rica. See you there!

Jun 29 2017
Jun 29

Drupal 8 Introduction

Drupal 8 is a very flexible framework for creating a community support site. Most of the functionality of a community site can be achieved via custom content types, views, and related entities, and by extending various core classes. You no longer need special-purpose modules such as Blog or Forum.

This blog will introduce several useful modules in Drupal 8 that are typically used when building a community site.

 

Segmenting the community into Groups

The key module in any community site is the one responsible for subdividing users and content into smaller groups or sections. Moderation of content in a group is often assigned to a specific collection of users. Users can join a group to contribute or discuss content.

Drupal 8 has two competing modules for splitting a site into Groups:  Group and Organic Groups (OG). While both also existed in Drupal 7, several architectural changes have been made in D8.

The Group Module

The Group module makes flexible use of the Drupal entity system. Each Group is an entity, which can have multiple bundles (“group types”). Any other entity, such as a node, can be associated with a Group via a relationship entity, confusingly called a “group content” entity. Note that the group content entity doesn’t contain the content itself; it is merely the entity that forms the relation between the group and the actual content. The “gnode” submodule provides the group content entity for Nodes.

Each group has a set of roles and permissions that users can be assigned to. Pre-created roles include “admin”, “member”, “outsider”, and “anonymous”. For example, members of a group can be given permission to create or comment on content. If a user is not assigned as a member (outsider or anonymous), they might be able to view content but not add or discuss it.

A patch (issue #2736233) adds the “ggroup” submodule which provides the group content entity for groups themselves. This allows one group to be added to another as a “subgroup”. You can currently map roles between the parent group and the subgroup, but you cannot directly inherit users from the parent group; users must also be added to the subgroup.

Because any entity can be related to a group via a custom group content entity, this module is highly flexible and customizable. Various patches and submodules are available for associating menus, domain names, notifications, taxonomy terms, and other entities to a group.
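
As a rough sketch of that flexibility in code (the “community” group type and the gnode plugin ID below are assumptions for illustration):

<?php
use Drupal\group\Entity\Group;
use Drupal\node\Entity\Node;
use Drupal\user\Entity\User;

// Create a group of an assumed "community" group type.
$group = Group::create(['type' => 'community', 'label' => 'Support Team']);
$group->save();

// Add a user as a member of the group.
$group->addMember(User::load(7));

// Relate a node to the group via the gnode plugin; behind the scenes this
// creates the "group content" relationship entity described above.
$group->addContent(Node::load(42), 'group_node:article');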

The Group module is under active development and currently has an RC1 release, with a full stable release expected shortly.

Organic Groups

The Organic Groups (OG) module was very popular in Drupal 7 and even used as the basis for the Open Atrium distribution. Many people were not aware that OG was being ported to Drupal 8 because the development was done in Github, away from the normal drupal.org project page. OG is also under active development, but no releases have been made; just a develop branch is available.

In OG, any Node can be marked as a Group. So a group is still an “entity”, but it is specifically a node entity. OG also has roles and permissions, such as “member”. However, when a user is added to a group, an og_membership relationship entity is created. Node content is placed into a group via a simple entity-reference field on the node, pointing to the group node that it belongs to.

A GitHub issue is open to allow a Group to belong to another group (subgroups), but it is more complex than simply adding an entity-reference field from one Group node to another. No concept of role mapping or user inheritance is available, nor is it planned.

While OG was used extensively in Drupal 7, its lack of releases and its more complex architecture have led many sites to start using the Group module instead, which has a more consistent framework for developers who need to integrate other entity types.

 

Subscriptions and Notifications in Drupal 8

To engage users on a community site, it must be easy to subscribe to interesting groups, content, and other users. The Message API module stack has been ported to Drupal 8 and provides the underlying message framework and email notification systems needed to tell users when their subscribed content has been added or updated. Phase2 has heavily contributed to the development and enhancement of the Message stack in Drupal 8, including the Message Digest module that allows email notifications from Message Subscribe to be grouped into daily or weekly digests.

Flags

The message subscription feature uses the Flag module, which is also a key module on most community sites. In Drupal 8, a Flag is a fieldable entity. The ability to add custom fields to a flag makes them even more useful than in the past. For example, the Message Digest module can store the frequency of digest (daily, weekly, etc) as part of the subscription flag itself.
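
In code, flagging runs through the module’s flag service. Here is a minimal sketch, assuming a flag with the machine name “subscribe_node” has been created in the UI:

<?php
use Drupal\node\Entity\Node;
use Drupal\user\Entity\User;

$flag_service = \Drupal::service('flag');
$flag = $flag_service->getFlagById('subscribe_node');

// Flagging a node for a user creates a fieldable "flagging" entity, on
// which custom fields (such as a digest frequency) can be stored.
$flag_service->flag($flag, Node::load(42), User::load(7));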

Rating Content

Community sites often contain a huge amount of content, and tools to help users sift through this content are needed. One way to find the most useful content is to allow users to “rank” content, either with a numeric rating system or just a simple like/dislike. While a Flag could be used to mark a node as “liked”, the Rate module provides a more flexible mechanism, allowing you to choose between different rating systems, all built on the Voting API module.

Since Comments are also entities in Drupal 8, you can rate comments, list them in ranked order of relevance, and even add a Flag for marking a specific comment as “the right answer”.

Rewarding Users

To reward your most active users, a “point system” is often used: users who post correct answers or have the highest-rated content earn more points. Adding this and other gamification features to your site can encourage the growth of a vibrant and active community.

You can add a custom Point field to the User entity in Drupal 8 and write some rules for adding (or subtracting) points based on various conditions. With Drupal 8's flexible framework, you can easily integrate third party gamification features into your platform as well.
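
A minimal sketch of that idea, assuming a custom integer field named field_points on the user entity and a custom module called mymodule:

<?php
use Drupal\comment\CommentInterface;

/**
 * Implements hook_comment_insert().
 *
 * Awards five points to a user each time they post a comment.
 */
function mymodule_comment_insert(CommentInterface $comment) {
  $author = $comment->getOwner();
  if ($author && !$author->isAnonymous()) {
    $points = (int) $author->get('field_points')->value;
    $author->set('field_points', $points + 5);
    $author->save();
  }
}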

 

Community Content

Community sites live and die by having useful, engaging, and relevant content. Sites will often have several different types of content, such as articles, blogs, events, documents, etc. If you try to use a module dedicated to a specific type of content, you’ll often need to override or customize behavior. In most cases it is more effective to simply create your own content type, add some custom fields, and create some views.

Take advantage of the object-oriented nature of Drupal 8. If you need a new field that has slightly custom behavior, simply extend a base field class that is close to what you need and modify it. Don’t be afraid of some custom development to achieve the specific needs of your site. Since you can often just inherit base classes from core, it’s easier to write simple and secure extensions.
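
For instance, a hypothetical custom field formatter only needs to extend a core base class and override a single method (the module and plugin names below are illustrative):

<?php

namespace Drupal\mymodule\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;

/**
 * Prefixes plain string values when they are displayed.
 *
 * @FieldFormatter(
 *   id = "mymodule_prefixed_string",
 *   label = @Translation("Prefixed string"),
 *   field_types = {"string"}
 * )
 */
class PrefixedStringFormatter extends FormatterBase {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = [];
    foreach ($items as $delta => $item) {
      $elements[$delta] = ['#plain_text' => 'Community: ' . $item->value];
    }
    return $elements;
  }

}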

 

Conclusion

Every community site is different. How you best engage your community depends greatly on the content that is created and on providing good tools to create and manage that content. But each community has unique motivators. Rather than using a one-size-fits-all product for your community, analyze and understand your specific requirements and prioritize those that best engage your users.

Most of the modules described here are under active development. Many do not have full stable releases yet in Drupal 8, but this is improving rapidly. If you see functionality close to your needs, get involved and help contribute to a module to make it better.

In Drupal 8, it’s easier than ever to use the enhanced core functionality along with some basic modules to achieve your needs.

Interested in learning how to speed up your support team with community content management best practices? Check out this blog!

May 16 2017
May 16

Columbia University is taking proactive steps to ensure its predominantly Drupal-based digital properties are offering the best possible experience to site visitors. Using Acquia’s Lightning distribution as a base, the CUIT team has begun to roll out a new platform on Drupal 8.

May 16 2017
May 16

Goal: Getting PHP Sessions into Redis

One of several performance-related goals for NBA.com was to get the production database to a read-only state. This included moving cache, the Dependency Injection container, and the key-value database table to Redis.  99% of all sessions were for logged-in users, which use a separate, internal instance of Drupal; but there are edge cases where anonymous users can still trigger a PHP session that gets saved to the database.

May 16 2017
May 16

DrupalCon 2017 may be over, but we’re still feeling the impact. Last week 20+ Phase2 team members and over 3,000 sponsors, attendees, and speakers converged on Baltimore for 5 days of Drupal.

In case you weren’t able to join us in person, here is a recap:

 

Impact At The Phase2 Booth

May 16 2017
May 16

The original purpose of the Features module was to “bundle reusable functionality”. The classic example was a “Photo Gallery” feature that could be created once and then used on multiple sites.

In Drupal 7, Features was also burdened with managing and deploying site configuration. This burden was removed in Drupal 8 when configuration management became part of Core, allowing Features to return to its original purpose.

May 16 2017
May 16

Building an online community is not a new topic, but the market is refocused on its growing importance because these online communities can increase customer retention, decrease customer support expenses, and increase profits.

May 16 2017
May 16

Pattern Lab is many wonderful things: a style guide, a component inventory, a prototyping system, and the embodiment of a design philosophy, all wrapped inside a fundamentally simple tool – a static site generator. It has greatly improved Phase2’s approach to how we build, theme, and design websites. Let’s talk about what Pattern Lab is, how we use it in our process by integrating it into the theme of a CMS like Drupal or WordPress, and the resulting change in our development workflow from linear to parallel.

Note: We’ll be discussing this topic in our webinar on June 16th. Register here!

What is Pattern Lab?

Pattern Lab allows us to easily create modular pieces of HTML for styling & scripting. We call these modular pieces of HTML components - you may have already heard of the iconic atoms, molecules, and organisms. Pattern Lab provides an easy-to-use interface to navigate around this component inventory.

Pattern Lab also does much more: it fills the role of a style guide by showing us colors, fonts, and font sizes selected by the design process and demonstrates common UI elements like buttons, forms, and icons along with the code needed to use them. That part is important: it’s the distinction between “this is what we’re going to build” and “this is what has been built and here’s how to use it.” Pattern Lab provides a handbook to guide the rest of your complex CMS.

Pattern Lab menu

We can also prototype with Pattern Lab because it supports “partials.” Partials allow our components to contain other components, giving us modular components. This lets us reuse our work in different contexts by not repeating ourselves, ensuring consistency of our design across a wide set of pages and viewports, and reducing the number of bugs and visual inconsistencies experienced when each page contains unique design elements. It supports either low fidelity “gray-boxing” or high fidelity “it looks like the finished site” prototyping. You can see an example of this by looking at the “Templates” and “Pages” in Pattern Lab below.

Templates and pages

To summarize, Pattern Lab is a style guide, a component inventory, and a prototyping environment where we can see our designs in the medium they are destined for: the browser! Now, let’s talk about the old way of doing things before we discuss how we implement this tool, and the difference the new way makes.

The Old Way

Old design workflow

Generally speaking (and greatly simplifying), the old process involved several linear steps that effectively block subsequent steps: the goal of each step is to create a deliverable that is a required resource for the next step to start. The main point I want to make is that in order for the front-end developers to implement the designs they need HTML, so they have to wait for the back-end developer to implement the functionality that creates the HTML.

Front-end developers just need HTML. We don’t need the HTML required for the complex logic of a CMS in order to style it. We just need HTML to style and script so we can create our deliverables: the CSS & JavaScript.

To reiterate this point: front-end devs just need HTML, wherever it comes from.

Now that we’ve set the stage and shown the problem, let’s take a look at the way we implement Pattern Lab and how that helps improve this process.

Integrating Pattern Lab and the CMS Theme

Instead of keeping our Pattern Lab site (which contains our prototype and style guide) separate from the CMS, we keep them together. Pattern Lab is just a simple static site generator that takes HTML shorthand and turns it into HTML longhand. We just put the Pattern Lab folder inside the theme right here (for a Drupal site): /sites/all/themes/theme-name/pattern-lab/. Now it’s next to other fundamental assets like CSS, JavaScript, Images, and Fonts. Sharing these assets between Pattern Lab and our CMS is a huge step forward in improving our process.

Folder Structure

theme-name/
  css/
    style.css
  js/
    script.js
  pattern-lab/
    source/   # (HTML Shorthand)
    public/   # (HTML Longhand - a.k.a. the Pattern Lab site)
  templates/
    *.tpl.php # (All our CMS template files)

Sharing CSS & JS Assets

With Pattern Lab inside our CMS theme folder, all we really need to do to “integrate” these two is include this HTML tag in Pattern Lab to use the CSS that our CMS theme is using:

<link href="../../../../css/style.css" rel="stylesheet" media="all">

And then include this HTML tag to use the CMS theme’s JavaScript:

<script src="../../../../js/script.js"></script>

How This Helps

All a web page needs is HTML for its content, CSS for styling that content, and JavaScript for any interaction behavior. We now have two ways to make HTML: programming the CMS (which takes a lot of time) so that non-technical content editors can create and edit content that generates the HTML, or using Pattern Lab to write “HTML shorthand” much, much more quickly. Both of those environments are linked to the same CSS and JavaScript, effectively sharing the styling and interaction behavior between our CMS and Pattern Lab.

Now, most of the time we’re not working with our clients just to make style guides and prototypes; we’re making complex CMS platforms that scale in some really big ways. Why would we want to waste time creating the HTML twice? Well, sites this big take time to build right. Remember that the front-end developers are usually waiting for the back-end developers to program in the functionality of the CMS, which ends up creating the HTML, which the front-end developers style by writing CSS & JS.

All we need is some HTML to work with so we know our CSS and JS are working right. We don’t care if it’s editable by content editors at this point; we just want it to look like the comps! Now that front-end devs have an environment in Pattern Lab with real HTML to style and script, we can bring the comps to life in the browser, with the added benefit of the CSS & JS being immediately available to the CMS theme. We are effectively un-blocked, free to work outside the constraints of a back-end bottleneck. This shift from a linear process to one where back-end and front-end development happen concurrently, in parallel, is a major step forward. Obvious benefits include speed, less re-work, clarity of progress, and a much earlier grasp on UI/UX issues.

The New Workflow

Parallel iterative process

With our style guide sharing CSS & JS with our CMS theme, we can pull up Pattern Lab pages exposing every button – and every size and color button variation – and write the CSS needed to style these buttons, then open up our CMS and see all the buttons styled exactly the way we want. We can get an exhaustive list of each text field, select box, radio button and more to style and have the results again propagate across the CMS’s pages. Especially when armed with a knowledge of the HTML that our CMS will most likely output, we can style components even before their functionality exists in the CMS!

As the back-end functionality is programmed into the CMS, HTML classes used in the Pattern Lab prototype are simply applied to the generated HTML to trigger the styling. It doesn’t matter too much whether back-end or front-end starts on a component first, as this process works in either direction! In fact, design can even be part of the fun! As designers create static comps, the front-end devs implement them in Pattern Lab, making the CSS available in the CMS as well. Then the Pattern Lab site acts as a resource for designers that contains the summation of all design decisions, reflected in a realistic environment: the browser. Designers can get the most up-to-date version of components, like the header for their next comp, by simply taking a screenshot and pulling it into their app of choice. This frees designers from the minutiae of ensuring consistent spacing and typography across all comps, allowing them to focus on the specific design problem they’re attempting to solve.

When designers, front-end developers, and back-end developers are iteratively working on a solution together, and each discipline contributes their wisdom, vision, and guidance to the others, a very clear picture of the best solution crystallizes and late surprises can often be avoided.

This parallel process brings many advantages:

  • Front-end can start earlier – often before a CMS and its environment is even ready!
  • Easy HTML changes = quick iteration.
  • Back-end has front-end reference guide for markup and classes desired.
  • Pattern Lab acts as an asset library for designers.
  • Project managers and stakeholders have an overview of progress on front-end components without being encumbered by missing functionality or lack of navigation in the CMS.
  • The progress of each discipline is immediately viewable to members of other disciplines. This prevents any one discipline from going down the wrong path too far, and it also allows the result of progress from each discipline to aid and inform the other disciplines.
  • Shared vocabulary of components and no more fractured vocabulary (is it the primary button or the main button?). Pattern Lab gives it a label and menu item. We can all finally know what a Media Block objectively is now.

Conclusion

By decoupling the creation of a design system based in CSS and JavaScript (styling) from the process of altering the HTML that our CMS is generating (theming), we’re able to remove the biggest blocker most projects experience: dependence on the CMS for CSS & JS to be written. We avoid this bottleneck by creating a nimble environment for building HTML that allows us to craft the deliverables of our design system: CSS & JS. And we do this in a way that provides these assets instantly to the CMS, so the CMS can take advantage of them on the fly while the site is being built concurrently, iteratively, and collaboratively.

May 16 2017
May 16

In 1994, I was a huge fan of the X-Men animated series. I distinctly remember an episode titled “Time Fugitives”, which featured Cable, a time-traveling mutant vigilante from the future, talking to a floating cube that gave him historical information about the X-Men of the past. I never thought that technology would exist in my lifetime, but I found myself a week ago sitting in my living room asking my Google Home (which resembles an air freshener rather than a cube) questions about historical context.

Conversational UIs - chatbot and voice assistant technologies - are becoming commonplace in consumers’ lives. Messaging apps alone account for 91% of all time spent on mobile and desktop devices. Soon, almost every major smartphone and computer will be equipped with Siri, Google Assistant, Cortana, or Samsung’s Bixby. These voice assistants are even being integrated into common home electronics - televisions, set-top boxes, video game units, and even washing machines and refrigerators. Sales of home personal assistants are on the rise, with the Amazon Echo alone having increased sales nine-fold year over year. Search giants Google and Microsoft are reporting significant increases in voice searches, each claiming about 25% of mobile searches are now performed using voice.

Graphic from e-consultancy showing Google Trends data from 2008 - 2016. Source: Should financial services brands follow Capital One on to Amazon Echo?, E-Consultancy, 2017

The trends are clear - conversational UIs are only becoming more prevalent in our lives, and some predict they will replace common technologies that we use today. In order to continue to engage audiences wherever they are, in the way they prefer to engage, companies should be investing in developing apps that leverage these technologies at home and in the workplace.

Benefits of Building Applications for Conversational UIs Now

While you may question the business benefits of developing applications that leverage conversational UIs at such an early stage in the maturation of this technology, there are some clear benefits that come with being on the leading edge of leveraging new technologies to engage consumers:

Early adoption can lead to great PR

Standing on the leading edge and developing applications for these emerging platforms can present a great opportunity to earn publicity, and position your organization as an innovative brand. An example of this can be seen in this eConsultancy article about CapitalOne’s Amazon Echo skill.

You can test new market opportunities

Conversational UIs may present an opportunity to engage with a market that your organization is not currently reaching. You may identify opportunities to gain new customers, improve customer satisfaction, or create new revenue streams by extending existing products and services into platforms with voice and chat interfaces. Some companies are already starting to offer paid tiers for services delivered via conversational UI applications, or to sell advertising on them.

Early adoption can provide a competitive advantage

While being first to market with a conversational UI app is not always a guarantee of success, it can provide you a leg up over the competition. If you start early, you will have an opportunity to identify best approaches to engage consumers on these new platforms, allowing you to have a well-defined experience once your competitors enter the market. Your brand may also be able to secure a percentage of the market share early due to a lower cost of user acquisition.

US consumers are creatures of habit and prefer to go back to familiar stores, products, and services they trust. In an ideal scenario, your conversational UI application will become integrated into consumers’ work and/or home lives before the market is saturated.

Potential Drawbacks

In all fairness, developing a conversational UI application is not easy. There are some associated risks that we would be remiss not to mention:

  • This is still the Wild West - very few best practices or standards have been established.

  • It can be expensive to develop and implement across the myriad of devices and services.

  • There is a potentially high learning curve depending on the platform you are building for and technologies you use to develop your app.

  • At this time, there are no clear methods for efficiently testing features on voice assistant applications.

  • Deploying content to these various platforms may require the use of many different CMSs.

While there are risks associated with starting to leverage conversational UI applications now, the long-term benefits may outweigh the short-term losses.

Stay tuned for part 2, where we will discuss how you can start leveraging conversational UI applications to build your brand and grow your business.

May 16 2017
May 16

With the recent launch of Penn State University’s main site and news site, we were able to help Penn State breathe new life into their outdated online presence, allowing prospective and current students alike to have the best possible experience on a wide array of devices. Working closely with the PSU design team, we created a complete experience from desktop to mobile, utilizing popular design patterns that help guide the user experience while never fully stripping away content from the end user.

Utilizing the Omega theme, we used the default media queries of mobile, narrow, and normal: under 740px (mobile), under 980px (tablet), and anything above (desktop). These media queries helped the PSU design team explore what was possible at each of these breakpoints and how fundamental elements could be optimized for the device on which they are displayed. Menus, search, curated news boxes, and featured article headers were the most notable areas where the PSU designers and Phase2 teamed up to bring out the best experience at each breakpoint.

Menus:

Whether we are talking about main menus, secondary, or even tertiary, all menus have their place and purpose in guiding the user through the site to their destination. The PSU design team never forgot this principle, and substantial effort went into making sure menu items were reachable at all breakpoints. While the main menu follows standard design patterns, the desktop-to-tablet change is just a slightly more condensed version of the original, made to optimize the horizontal space of a tablet in portrait mode. Moving down to mobile, we made the biggest changes: the main menu collapses into a large tappable button that reveals and hides a vertical menu with large target areas, optimized for mobile.

The secondary menu behaves in a similar fashion to the main menu, collapsing down to a larger clickable button that reveals enlarged menu items, which gain visual appeal while also providing a larger area for users to tap. The transformation happens earlier, at the tablet level, as we felt the condensed horizontal space would make the text-only menu items harder to read and more difficult to tap on smaller screens.

Search:

Search was another component that Penn State needed to emphasize throughout the site. It was very important to keep this as simple as possible, so like the menus, we decided to collapse the search, for mobile only, into a drawer reveal focused on simplicity and a large focus area. Again, we went with a large icon that provides a generous target area for the mobile and tablet experience.

Curated news boxes:

On the homepage, the curated news boxes provided a fun canvas to work with content that shifts around as the device changes from desktop to mobile. Knowing that space is limited in the mobile realm, it was important to provide something visually pleasing that would still engage the user to click through to a news story. Iconography was used to convey the specific type of news piece, while the title was left to entice the user into clicking through to the story.

Mobile curated boxes

Tablet Curated Boxes

Featured Article Header:

Imagery was crucial to the PSU redesign strategy, so it was only natural to give the featured article headers engaging treatments. If the article header implemented a slideshow, we used Flexslider; otherwise, simple CSS scaled the images per breakpoint. The meta description related to the image truncates and shifts around depending on the breakpoint for better readability and appearance.

By implementing responsive design patterns, we were able to help the PSU team achieve their goal of making their online content and news accessible by any device.

May 16 2017
May 16

No doubt you’ve heard the phrase “Content is King.” But what exactly is content? The precise definition is subjective – it is influenced by the context in which it is defined. There is no universal definition within the industry, and it is highly likely there is no single definition within your organization.

To have a successful content strategy, it is critical that your organization determines precisely what content means to you, as its definition will inform your entire editorial experience.

An Efficient Editorial Experience

When designing editorial experiences, there is inherent friction between system architecture and user experience. The more complex the structure, the less usable the editorial experience of your CMS becomes. Content strategists strive to follow best practices when modeling content, but these object-oriented models do not take into account the workflow of tasks required to publish content.

Modern content management platforms offer organizations a variety of entities used to build an editorial experience – content types, taxonomies, components, etc. Although editors and producers learn how to use them over time, there can be a steep learning curve when figuring out how to combine these entities to perform tasks, like creating a landing page for a campaign. That learning curve can have two adverse effects on your websites:

  1. You lose efficiency in the content creation process, leading to delayed launches and increased costs.

  2. Incorrect use of the CMS, resulting in increased support and ownership costs.

Content Management Best Practice: Focus on Tasks

Avoid these risks by designing task-based editorial experiences. Task-based user interfaces, like those of Microsoft Windows and Mac OS X, present quick paths to whatever task your content creator wants to accomplish, rather than leaving the user to plot their own path. The greatest efficiencies can be gained by creating a single interface, or multistep interface, for accomplishing a task. Do not require the user to access multiple administrative interfaces.

To enable this set-up, perform user research to understand how content is perceived within your organization and how users of your CMS expect to create it. This is easily done by conducting stakeholder interviews to define requirements. Our digital strategy team has also found success in following practices found in the Lean methodology, quickly prototyping and testing editorial experiences to validate assumptions we make about users’ needs.

To ensure the success of your content operations, define the needs and expectations of the content editors and producers first and foremost. Equally important, prioritize tasks over CMS entities to streamline your inline editorial experience for content producers and editors.
